Hacker News | shilch's comments

This is not at all what the article describes. You're making the same mistake as truth_machine does: the loop unrolling is not part of the proof but just an implementation detail of the transition-table lookup; it could just as well have been a large if-else. The actual looping / transitioning is implemented via bitcoin transactions, which is a new technique that was only recently developed.


> The loop unrolling

Unrolling once is still what I'm referring to there.

> a new technique that was only recently developed.

That's simply false. E.g. search for 'covenant' or 'recursive covenant'. It's also described on the mailing list as far back as 2011.

(And, incidentally, the first example of a turing complete machine controlling the release of a transaction on Bitcoin was in 2016, and it was vastly more efficient and private than the approach used by the post-- https://bitcoincore.org/en/2016/02/26/zero-knowledge-conting... )


The proof in the article is not about unrolling.

Everything I've read about "covenants" requires a protocol change. Regardless of who described the underlying technique first, that doesn't invalidate the proof.


> The proof in the article is not about unrolling.

Sure it is. The state machine expressed in the script checks one (or more) steps of the update rule. That's what we mean by unrolling, not the "or more" part but the fact that the script is just running a simple circuit for a fixed operation.

> doesn't invalidate the proof

Proof of what? It's not a proof of script being turing complete. If you're claiming that it's that-- it's invalid on its face.

If you're saying it's a proof that script can implement a static state machine that runs one or more steps at a time checking some transcript computed by an external process and check consistency of state using the outputs-- then sure, that's not news, nor controversial, it's been known almost all of the system's life.


I don't know what you're up to: your account is 30 minutes old and was created just for commenting on this post; you appear to be very aware of this project, yet your comments show that you didn't understand the solution presented in the article. As I already said in reply to the parent comment: the `loop` in sCrypt is a compile-time loop and just an implementation detail for the lookup in the transition table. The *actual looping* happens outside, via bitcoin transactions, whereby each transaction is a transition in the TM. https://news.ycombinator.com/item?id=28587465


I am very familiar with Bitcoin script, and it was rather easy to confirm that BSV is using the same set of opcodes, and sCrypt compiles to bitcoin script, with obvious conclusions. So I think that I actually understand the topic (and the article) very well, thank you very much.

What am I up to? My beef with the article is quite simple: it is clearly written with a singular goal in mind, to claim that "Bitcoin is turing complete", which is a trivially verifiable falsehood, so I failed to resist the "someone is wrong on the internet" impulse. Are you implying that I am arguing in bad faith?

There seem to be many other articles by the same author making the same claim, with equally tenuous "proofs": https://xiaohuiliu.medium.com/play-conways-game-of-life-on-b... and https://xiaohuiliu.medium.com/turing-complete-rule-110-on-bi...

So the question should rather be "what is he up to?". Probably just promotion for his language, or clickbait titles.


Then you either misunderstand or (intentionally?) misrepresent the article. The `loop` construct you're talking about has nothing to do with the author's proof, and neither does it "essentially use bitcoin blockchain as a database" as you have written somewhere else in the thread. For the interested reader, I have commented on these points where they were brought up.

To explain the Game of Life contract you're linking to: The `loop`, again, is just an implementation detail to go over each field on the board in a *single* transaction. It's not part of any proof. The actual turing-complete element - letting the GoL run - happens outside: Each generation state transfer happens via a bitcoin transaction.


It seems to me that we actually agree on the main points.

I do agree with you that a single contract transaction is not turing-complete. I also agree that the turing-complete element happens outside.

My disagreement is with the following:

1. I disagree that the "loop is just an implementation detail and is not part of any proof". The GoL article claims that (a) a Game of Life board could simulate a turing machine and (b) the article provides an implementation of a GoL board in sCrypt, therefore "Bitcoin is turing-complete". However, the board in the article is limited in size and cannot be made 1000s x 1000s (as required for simulating the turing machine) precisely because of the loop unrolling. I also note that the author does not point out this limitation (in any of his articles, it seems): the claims are always "we can simulate Game of Life, we can do Machine Learning, we can simulate the Rule 110 automaton", without any mention that these are toy examples that hardly do anything and can't scale even by an order of magnitude. So the loop is indeed an implementation detail, but quite an essential one, it seems.

2. More broadly, I am arguing against the claim that "Bitcoin is turing complete" made in this and other articles by the same author. But, again, it seems that on this point we are actually in agreement.


The loop unrolling is an implementation detail. As I have written somewhere else: You could as well have put a large if-else there to do the transition table lookup.

In the GoL example, one generation update is performed in every transaction. Since the board size is known in advance, it totally makes sense to unroll the loop. If you want to do boards larger than that, like the 1000x1000 you mention, you run into script limits. What you can then do instead is, for example, to update the first half of the board in one transaction and then the second half in another transaction, i.e. two transactions are one generation.
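Here's a rough Python sketch of that splitting idea (the board, the glider, and the half-split are my own illustration, not from the article): as long as both half-updates read the unmodified previous generation, their union equals a full-board update.

```python
def neighbors(board, r, c):
    # Count live neighbors on a toroidal (wrapping) board.
    rows, cols = len(board), len(board[0])
    return sum(board[(r + dr) % rows][(c + dc) % cols]
               for dr in (-1, 0, 1) for dc in (-1, 0, 1)
               if (dr, dc) != (0, 0))

def step_cells(board, cells):
    # Compute next-generation values for `cells` only, always reading
    # the unmodified previous generation (B3/S23 rules).
    out = {}
    for r, c in cells:
        n = neighbors(board, r, c)
        out[(r, c)] = 1 if (n == 3 or (n == 2 and board[r][c])) else 0
    return out

# A glider on a 6x6 toroidal board (toy example).
board = [[0] * 6 for _ in range(6)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    board[r][c] = 1

all_cells = [(r, c) for r in range(6) for c in range(6)]
full = step_cells(board, all_cells)        # one transaction: whole board
half1 = step_cells(board, all_cells[:18])  # tx 1: first half of the board
half2 = step_cells(board, all_cells[18:])  # tx 2: second half of the board
assert {**half1, **half2} == full          # two txs == one generation
```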

Bitcoin is turing complete as shown in this article.


Sorry, but the (max) board size is not known in advance. The board grows as the Turing machine simulation runs.

> What you can then do instead is, for example, to update the first half of the board in one transaction and then the second half in another transaction, i.e. two transactions are one generation

Yes, we just need to add the external "driver" that partitions the board, determines the bounds of board parts, constructs transactions that do all the necessary work of updating the board, checks whether the computation is finished - in other words, does the hard work of partitioning the computation in the chunks of fixed predetermined size, which is only necessary because bitcoin script is not turing-complete.


> because bitcoin script is not turing-complete

This is not what the article claims to show. It shows that the system bitcoin is turing complete.

Added to this, having an "external driver" and the system being turing-complete are not mutually exclusive.


> having an "external driver" and the system being turing-complete are not mutually exclusive.

In many cases, they are. For instance, if you tack on an already-Turing-complete component to the existing "system", then it's disingenuous to call the existing system Turing complete.

Further, this "covenant"-style system isn't really what Turing had in mind when he spoke of an "automatic machine".

Further further, this has nothing to do with Craig Wright, who famously called Script itself Turing complete in a plagiarized paper, and said that the "alt stack" is critical for it to be Turing complete.

Nonsense.


What is the "bitcoin system"?


I might write an article that explains it (if I find the time to do so). This back-and-forth thread format is fruitless, since it appears we don't share a common understanding of what makes bitcoin turing complete. Let me know how I can reach you if I do write such an article (and you're interested).


If you write such an article and it is sound, I imagine I would hear all about it during your Alan Turing Award lecture :)

But you can just post it in the comments here; I will see it.



These are two separate things. The `loop` construct in the sCrypt language is a compile-time loop. That is, the body gets unrolled N times (8 in this example). It's just an implementation detail for the lookup in the transition table, and it is not part of the proof. The author is pretty clear that each transition in the TM is implemented as a bitcoin transaction.
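To make the distinction concrete, here's a rough Python sketch (my own illustration, not the article's sCrypt code, and the 3-entry table is made up): the compile-time loop just expands into a fixed chain of comparisons over the transition table.

```python
# Hypothetical toy transition table for illustration only:
# (state, read symbol) -> (next state, symbol to write, head move)
TABLE = {
    ("A", 0): ("B", 1, +1),
    ("A", 1): ("C", 0, -1),
    ("B", 0): ("A", 1, +1),
}

def lookup_with_loop(state, symbol):
    # What the sCrypt `loop` conceptually does: iterate over the
    # table entries, which are fixed and known at compile time.
    for (s, sym), result in TABLE.items():
        if (s, sym) == (state, symbol):
            return result
    return None

def lookup_unrolled(state, symbol):
    # The same lookup with the loop "unrolled" into an if-else chain,
    # which is all the compiler emits anyway.
    if (state, symbol) == ("A", 0): return ("B", 1, +1)
    if (state, symbol) == ("A", 1): return ("C", 0, -1)
    if (state, symbol) == ("B", 0): return ("A", 1, +1)
    return None

# Both forms are equivalent; neither one loops at "run time".
assert lookup_with_loop("A", 1) == lookup_unrolled("A", 1) == ("C", 0, -1)
```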


The presented solution does *not* loop inside bitcoin script itself, as you suggest, but outside, whereby bitcoin transactions perform the state transfer. The machine runs for as long as someone pays for it (or until it reaches an accepting state), which makes sense: if there were a one-time fee for unbounded or potentially infinite runtime, you could create a program that never terminates. This is comparable to Ethereum, where every execution step costs fees and the caller needs to ensure that a sufficient amount of fees (gas) is paid.
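A rough Python sketch of that driving model (the names and the toy machine are mine; a function call stands in for an on-chain transaction): each "transaction" applies exactly one entry of the transition table, and the machine only advances while someone funds the next step.

```python
# Hypothetical toy machine: zero out 1s until a 0 is read, then halt.
# (state, read symbol) -> (next state, symbol to write, head move)
TABLE = {
    ("run", 1): ("run", 0, +1),
    ("run", 0): ("halt", 0, 0),
}

def transition(tape, head, state):
    # One bitcoin transaction == one application of the transition table.
    # On chain, the script would only accept a spend matching the table.
    new_state, write, move = TABLE[(state, tape[head])]
    tape = tape[:head] + [write] + tape[head + 1:]
    return tape, head + move, new_state

tape, head, state = [1, 1, 0], 0, "run"
fees_remaining = 10                      # the machine runs while funded
while state != "halt" and fees_remaining > 0:
    tape, head, state = transition(tape, head, state)
    fees_remaining -= 1                  # each transaction costs a fee

assert (state, tape) == ("halt", [0, 0, 0])
```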


Well, in Ethereum, provided that a sufficient amount of gas is paid for, I could have a contract that implements several (many?) iterations of the Turing machine, or any other computation.

With the approach proposed in the article I need to have an external Turing-complete "controller" that would keep calling the contract.

At this point, what benefit am I getting from having this "contract" at all? I would be better off just serializing the state of my machine and putting it into OP_RETURN, getting a much smaller blob to store on the chain. That way I save on fees and can implement my Turing machine (or anything else, really) in the language of my choice, not constrained by the absence of loops and function calls.

The article essentially uses the bitcoin blockchain as a database (I hesitate to use the word "ledger"), and the "contract" is just a gimmick, seemingly introduced only to prop up the absurd claim that bitcoin somehow becomes turing-complete when an external turing-complete controller performs "contract calls".


In Ethereum, the caller specifies the gas amount beforehand to ensure that the execution finishes. In the presented bitcoin-based solution, the caller prepares the transactions beforehand that finish the execution; it then publishes the transactions.

> At this point, what is the benefit I am getting from having this "contract" at all? I would be better off with a just serializing the state of my machine and putting it into OP_RETURN, getting a much smaller blob to store on the chain.

> Article essentially uses bitcoin blockchain as a database

This is just plain wrong and not at all what this article is about. In the article, a script is developed that enforces state transfer by the specified transition table, i.e. only a specific set of bitcoin transactions are allowed on the state, namely the ones from the transition table.


> This is just plain wrong and not at all what this article is about. In the article, a script is developed that enforces state transfer by the specified transition table, i.e. only a specific set of bitcoin transactions are allowed on the state, namely the ones from the transition table.

So what is the article about, then? I started this thread disagreeing with the claim that material presented in the article somehow makes bitcoin turing complete and claiming that it, in fact, is not. You seem to be arguing this point with me, but I am not exactly sure what your (counter)arguments are.


I was aware of xhliu's work before. It's actually a big thing because Ethereum was created with the assumption that bitcoin's scripting language does not allow for state transfers (stateful contracts) and turing-complete computations. The advantage over account-based systems like Ethereum is scalability: Ethereum contracts are one central entity identified by their address, any access on the contracts must be serialised which leads to scaling issues with popular contracts. Bitcoin's UTXO model on the other hand is rather easy to parallelize; but scalable UTXO-based contracts also require a different design.


The solution presented in the article uses the bitcoin protocol; BSV is just one implementation of that. It could also work on BCH if some default limits were lifted, and it would be (theoretically, if not economically) possible on BTC.


BSV and Bitcoin (BTC) are incompatible protocols, so this is just a straight-out lie. Sure, it could be added to BTC as long as there's a use case and consensus for it, but so far there isn't.


> BSV and Bitcoin (BTC) are incompatible protocols

Both implementations have a large intersection. Dogecoin, Litecoin and Dash also use the bitcoin protocol, so it should work on those as well.

> Sure, it can be added to BTC

The presented solution does not require a protocol change. BSV is essentially the original bitcoin protocol (with a few small exceptions); nothing was added to the protocol to implement the turing machine. BTC has some of the scripting opcodes disabled, but they can either be emulated using other opcodes (no change) or re-enabled in a new SegWit deployment (requires a change). However, the issue you'll run into in practice is that the reference implementation (Bitcoin Core) sets very small limits and discourages scripting. You'd have to find a miner that mines the custom scripting transactions. And, well, fees are another issue.


> It could also work on BCH if some default limits were lifted

Can you list what those default limits are, and how big the needed changes are?


After writing that comment, I noticed that the scripting limits on BCH are actually a network rule and not miner-configurable. So my statement that "it could also work on BCH if some default limits were lifted" is only correct in the sense that this number needs to be increased in a network upgrade: https://gitlab.com/bitcoin-cash-node/bitcoin-cash-node/-/blo...


macOS doesn't have jails like FreeBSD does, but it does use some kind of isolation for Mac applications, so they cannot see other applications' data.


(A port of) Visual Studio Code is available for FreeBSD (I'm using it myself): `pkg install vscode`. If you want to use docker, you could run a Linux distro with the docker daemon in a virtual machine and configure your host docker command to use the daemon inside the VM. That's basically how docker works on macOS.


For docker on FreeBSD to get any traction, it would need to be implemented with the VM behind the scenes. The beauty of the macOS/Windows versions is that they require zero setup/maintenance and the VM is an implementation detail.


It seems like this should be relatively simple; build docker-cli as a native FreeBSD program, make a docker-machine backend for bhyve... port docker-machine to FreeBSD, I guess (I hope that's not hard; it's not doing anything OS-specific), and you should be good to go.


>docker on freebsd to get any traction

Yeah, no thanks... run bhyve with Ubuntu and then your docker... have (no) fun.

That's "docker" for freebsd:

https://bastillebsd.org/


Wow... bastille is a pretty capable container manager!


Yes! I love to work with it.


Like computer scientists, they think in binary: either it's secure, or it's not. In reality there's a spectrum where you also have "good enough".


"good enough" relies on a threat model. Cryptography researchers work in the abstract - without a threat model you must consider cases where your attacker has unlimited resources.

It's good enough for you and me, but research isn't meant to be practical, imo


What. The first thing any security paper defines is the assumed threat model. People design all kinds of schemes for different threat models.

The point with assuming conservative threat models for key primitives like hash functions is that the threat model can change rapidly even within the same application, and attackers only get stronger. So you err on the side of caution, and don't rely on luck to keep safe.


The security strategy with the most practical utility for most software engineers working on most projects is defense in depth: multiple layers, each assumed to be breakable.

It's a striking contrast with the stark mathematical language deployed by cryptographers, on whose work we rely.

If we differentiate between the two fields of software engineering and cryptography, it's easier to be generous in our appreciation for the different goals and mental models.


Most people think in yes/no logic. Unfortunately, binary is a horrible oversimplification of a very analog reality, and it causes many of the problems the world is in now. Because we tend to think in a binary of yes/no, we often end up flying from one ditch on the side of the road to the other.


But in many contexts, "good enough" is more a question of perception than reality.

These statements serve to shift the Overton window of perception, and therefore help improve the odds that people aren't thinking "good enough" when things are broken.


Clang also does some crazy-good SIMD optimizations if you set -march=native. But I think the Rust compiler is based on LLVM as well, right?


Everybody should build everything with a reasonable -march setting. Debian Linux still builds everything for K8, a microarchitecture that lacks SSE3, SSE4, AVX, AES, etc. Even building with -march=sandybridge seems pretty conservative and gives significant speed improvements in many common cases.


It is, yes.

