• 1 Post
  • 13 Comments
Joined 1 year ago
Cake day: June 12th, 2023



  • There’s only incentive to do that if the mempool is empty. If the mempool is full, there will be plenty of transactions for both the first miner and the next miner.

    Wait… This entire paper only makes sense if the mempool is near empty. If the mempool is full, then there is no reason to mine an empty/partial block because there will always be transactions left for future miners.

    So basically:

    • Mempool full = the miner mines full blocks, just as intended.
    • Mempool empty = the miner mines empty blocks, but that isn’t a problem because there are no transactions to process anyway (a rough sketch of this incentive is below).
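
    To make that incentive concrete, here is a toy sketch (made-up numbers, a hypothetical `BLOCK_CAPACITY`, and obviously not real Bitcoin code): the only thing a miner gives up by leaving transactions out of a block is their fees, and that cost only disappears when the mempool is near empty.

    ```python
    # Toy model of a miner's immediate fee revenue from one block,
    # depending on how many mempool transactions it chooses to include.
    # All numbers are made up for illustration.

    BLOCK_CAPACITY = 3_000          # hypothetical max transactions per block

    def block_revenue(mempool_fees, included):
        """Fees collected if the miner includes the `included` highest-fee txs."""
        best_first = sorted(mempool_fees, reverse=True)
        return sum(best_first[:min(included, BLOCK_CAPACITY)], 0.0)

    # Full mempool: plenty of transactions for this block *and* the next one,
    # so mining a partial/empty block only throws fees away.
    full_mempool = [2.0] * 10_000            # 10k pending txs paying ~2 units each
    print(block_revenue(full_mempool, BLOCK_CAPACITY))   # full block  -> 6000.0
    print(block_revenue(full_mempool, 0))                # empty block -> 0.0

    # Near-empty mempool: an "empty" block costs the miner almost nothing,
    # but then there was almost nothing to process in the first place.
    near_empty_mempool = [2.0] * 5
    print(block_revenue(near_empty_mempool, BLOCK_CAPACITY))  # -> 10.0
    print(block_revenue(near_empty_mempool, 0))               # -> 0.0
    ```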


  • I think we generally agree, but I just want to clarify anyway. I’m not saying we should use PNG to store frames from videos.

    What I am saying, however, is that we should replace PNG with a modern lossless image format that is flexible enough that users don’t have to deal with these issues. All the colorspace handling should be automatic, and I shouldn’t have to worry about whether the result is actually lossless. If I want to save a frame of video, I should be able to do it with an image format that everybody recognizes and accepts; it should not be a huge hassle, and it should be fully lossless.
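
    As a rough illustration of the kind of loss I mean, here is a NumPy-only sketch of just one step, limited-range BT.601 quantization. It is not what any particular tool does, and real video pipelines add chroma subsampling on top of this, but even this single round trip is not bit-exact.

    ```python
    # Sketch: push 8-bit RGB through the limited-range ("studio swing")
    # BT.601 YCbCr representation that video is commonly stored in, then
    # convert back, and count how many pixels no longer match exactly.
    import numpy as np

    # Forward BT.601 limited-range matrix, for RGB scaled to [0, 1].
    FWD = np.array([[ 65.481, 128.553,  24.966],
                    [-37.797, -74.203, 112.0  ],
                    [112.0,   -93.786, -18.214]])
    OFFSET = np.array([16.0, 128.0, 128.0])

    rng = np.random.default_rng(0)
    rgb = rng.integers(0, 256, size=(100_000, 3)).astype(np.float64)

    # RGB -> 8-bit YCbCr (what an encoder would store), then back to 8-bit RGB.
    ycbcr = np.clip(np.rint(rgb / 255.0 @ FWD.T + OFFSET), 0, 255)
    rgb_back = np.clip(np.rint((ycbcr - OFFSET) @ np.linalg.inv(FWD).T * 255.0), 0, 255)

    mismatched = np.any(rgb_back != rgb, axis=1).sum()
    print(f"{mismatched} of {len(rgb)} pixels changed after one round trip")
    ```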







  • it just predicts the next word out of likely candidates based on the previous words

    An entity that could consistently predict the next word of any conversation, book, or news article with extremely high accuracy would quite literally be a god, because it could effectively predict the future. So it is not surprising to me that GPT’s performance is not consistent.
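
    For what it’s worth, the “likely candidates” part is easy to see with any open model. Here is a minimal sketch using GPT-2 via the transformers library, purely as a stand-in for GPT-style models (ChatGPT itself isn’t open, so this only illustrates the mechanism):

    ```python
    # Ask a small causal language model for its probability distribution
    # over the next token, and print the top candidates.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The capital of France is"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]    # scores for the next token only
    probs = torch.softmax(logits, dim=-1)

    # The "likely candidates" the model is choosing between:
    top = torch.topk(probs, k=5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode(int(idx))!r:>12}  p={p.item():.3f}")
    ```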

    It won’t even know it’s written itself into a corner

    In many cases it does. For example, if GPT gives you a wrong answer, you can often just send an empty message (a single space) and GPT will say something like: “Looks like my previous answer was incorrect, let me try again: blah blah blah”.

    And until we get a new approach to LLM’s, we can only improve it by adding more training data and more layers allowing it to pick out more subtle patterns in larger amounts of data.

    This says nothing. You are effectively saying “until we can find a new approach, we can only expand on the existing approach”, which is obvious.

    But new approaches come along all the time! There are constant advances in tokenization, and every week there is a new paper with a new model architecture. We are not stuck in some sort of hole.