gendal's comments | Hacker News

The amount UK wind farms have contributed back over the last ten years is a rounding error compared to how much they have received. It's not even close: https://x.com/7Kiwi/status/2031657347433603581

And the scary thing: the wind farms aren't even making that much money! Some projects have been cancelled and others had to re-bid in subsequent auctions to get a higher CFD price than they originally received because they couldn't make the economics work. Worse, there are reasons to believe they're not even fully provisioning for their end-of-life decommissioning costs.

The UK's energy policy is unbelievably destructive :(


Where’s his methodology?

How does he separate the CfD price from the market price that’s being set by renewables?

Where’s his evidence that using gas would be cheaper than renewables?


He explains on his website where he gets his data from. He gets it from The Low Carbon Contracts Company... y'know: the firm who is the actual counterparty to the CFDs and so should probably know the actual sums of cash being moved - and in which direction.

His January article: https://davidturver.substack.com/p/record-january-cfd-subsid...

LCCC's relevant data page: https://dp.lowcarboncontracts.uk/dataset/actual-cfd-generati...

The actual spreadsheet: https://dp.lowcarboncontracts.uk/dataset/8e8ca0d5-c774-4dc8-...

And note: even when gas is more expensive than the CFDs, the huge fixed and/or policy costs (network build-out, capacity market, curtailment, etc) are devastating.

The story would be completely different if wind farms were actually cheap to build and run... the problem is they're just not.

I wish it were not so... it would be great if we had a path to being free of dependence on hydrocarbons. But in a battle between wishful thinking and physical and economic reality, reality usually wins.

So we're faced with a choice as a nation: continue to pour tens of billions of pounds down this drain... or call time on the experiment and free up all that money for something productive?


Thanks… I’ll have a read through though I’m highly skeptical of anyone who’s a member of Toby Young’s Free Speech Union… it says a lot about their political leanings


Cheers. No doubt there's additional nuance I've missed but I'm fairly certain he's directionally correct. And, if he is, we face some dire consequences as a nation.

Re the Free Speech Union, that's an interesting one and perhaps points to a broader point. It often feels to me that there is an asymmetry of risk faced by participants in some highly charged debates. I know this is a cliche, but there is definitely something to the adage that "conservatives think progressives are stupid, but progressives think conservatives are evil".

So it doesn't surprise me at all that the FSU was founded by somebody from that 'side': if you're debating in an environment where some (I stress some) of the people who may read your writings may actually think you're evil, as opposed to just wrong, it seems rational to invest in some protection?

In any case, I don't know Turver, but I have no reason to believe he's making this stuff up. He seems pretty rational to me, and does share his working. I'd urge you to remain open minded to the (scary) possibility he's right.


Yes - the wholesale price of electricity can sometimes be very low (or negative), particularly when there is a lot of wind. And some tariffs pass that on to consumers. But I don't see how it works at scale.

This is because the wind farms don't get paid the wholesale price. They get paid their guaranteed, index-linked CFD strike price. This means that, for every £1/MWh drop in the wholesale price, they get an exactly matching extra £1/MWh to top them back up to their strike price. They can bid a low price into the market safe in the knowledge they'll get paid their CFD price.

And that top-up has to be paid by somebody: either other bill payers, or the taxpayer.
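The mechanics are simple enough to sketch in a few lines of Python. The prices below are invented purely for illustration (actual strike prices vary by allocation round and indexation):

```python
# Toy sketch of a CfD top-up. All prices are invented for illustration.
strike = 110.0      # GBP/MWh: guaranteed, index-linked CfD strike price (hypothetical)
wholesale = 65.0    # GBP/MWh: market clearing price on a windy day (hypothetical)

top_up = strike - wholesale           # paid by bill payers / taxpayer when positive
assert wholesale + top_up == strike   # the generator always nets the strike price

# A GBP 1/MWh fall in the wholesale price raises the top-up by exactly GBP 1/MWh:
top_up_after_fall = strike - (wholesale - 1.0)
assert top_up_after_fall == top_up + 1.0
```

Note the consequence: the generator's revenue is completely insulated from the wholesale price, which is why it can bid arbitrarily low into the market.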

That wouldn't be so bad if the strike prices were low. But they're not. The recent "Allocation Rounds" guaranteed offshore wind farms in excess of GBP100/MWh, index linked for at least 15 years.

To put it in context, these numbers are higher than the wholesale price of gas-generated electricity - even with an uplift for carbon externalities - for all but the worst period of the Ukraine invasion a few years back.

But it gets worse: on top of these extremely high fixed prices for wind, we also have to pay for the installation of tens of billions of pounds (if not more) of new grid connections, because the wind farms are nowhere near the centres of demand. This cost is also added to bills.

It doesn't end there. Readers may be aware that the wind doesn't always blow. Which means we need something that's able to spin up or down on demand. In the UK, that means gas. So we have the ridiculous situation of having to pay the gas plants to sit around doing nothing, just so we can call on them at minimal notice when needed. And remember: there can be long periods with basically zero wind or solar (the famous 'dunkelflaute' phenomenon in winter).

This means we need non-wind capacity pretty much equal to peak winter demand, in order to be safe during the week or so, in some years, when there's essentially no wind.

So we're paying to build and maintain TWO generation systems in parallel.

This is why electricity costs in the UK are on an ever-upwards trajectory: all these 'policy costs' are added to the wholesale price, and are a large and growing component of the _retail_ price that most consumers pay.

Depressingly, 'storage' doesn't fix this. Indeed, it's a fun exercise to calculate how much electrical energy is consumed in the peak of winter in the UK over a one- or two-week period and then figure out how much the necessary battery capacity would cost... or, even more fun, how many Welsh and Scottish valleys we'd need to flood to create the pumped-storage capacity. We're talking tens of trillions of pounds.
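Here's a back-of-envelope version of that exercise in Python. Every input is an assumption I've picked for illustration (average demand, drought length, installed battery cost), not a measured figure; the point is the order of magnitude:

```python
# Back-of-envelope: batteries to ride out a windless winter fortnight in the UK.
# All inputs below are illustrative assumptions, not measured figures.
avg_winter_demand_gw = 40.0       # assumed average UK winter electricity demand
drought_hours = 14 * 24           # assumed two-week 'dunkelflaute'
battery_cost_gbp_per_kwh = 300.0  # assumed installed grid-battery cost

energy_needed_kwh = avg_winter_demand_gw * 1_000_000 * drought_hours
cost_gbp = energy_needed_kwh * battery_cost_gbp_per_kwh

print(f"Energy: {energy_needed_kwh / 1e9:.1f} TWh")
print(f"Cost:   GBP {cost_gbp / 1e12:.1f} trillion")
```

Even with these assumptions the bill runs into the trillions of pounds, before accounting for battery degradation and replacement cycles over the asset's life.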

I fear the UK has, with the best of intentions, made a mistake of generation-defining proportions with its bet on wind :(


In other words, betting on wind (or sun) where there is little wind (or sun) is not an optimal choice. But folks on HN were telling us otherwise. Who might've known…


If it’s all due to the reliance on wind and the floor price CfDs set why does the price spike when oil and gas prices rise?


It's a good question. To be honest, I'm still trying to get my head around how the UK electricity market works. Its complexity is definitely a big part of why so many reasonable people can end up disagreeing so vehemently... vanishingly few people understand how the whole thing works (and pretty much none of those who do are listened to by the politicians...)

Your question is good for another reason: you say "price" without qualifying whether you mean wholesale or retail (and, if retail, what individual households pay or what industry experiences). A lot of commentators and politicians routinely conflate the two to serve their own agendas and confuse non-experts.

If one looks first at the wholesale price, you're right that - in general - one would expect it to 'spike' when the gas prices shoot up. But on days when wind is dominant this has a minimal effect on retail prices, because the extra money paid to the wind farms (everybody gets the clearing price) is exactly offset by a reduction in the CfD payment. To repeat: consumers pay the same (high) price for most wind-generated electricity irrespective of the gas price.

So the interesting question, I think, is: what happens on days when the wind isn't blowing and gas generation is dominant? And here's the thing: if the price for gas-generated electricity (with carbon tax to account for the climate externality) is below the CfD strike prices, we're still ahead, even if it has spiked above its average. And because the CfD strike prices are so eye-wateringly high, this happens far more often than not.

Indeed, it was only for part of 2022 that the wholesale price was above the CfD prices, and so the wind farms were paying money into the system rather than taking it out.

This chart from David Turver (who I learned a lot of this stuff from) is eye-opening in that regard: https://x.com/7Kiwi/status/2031657347433603581 (edited to provide clearer chart)

If the renewables fleet is supposed to be protecting us from gas price spikes, we're paying a VERY expensive premium for that insurance.


Not a "hard" bug but a useful lesson in any case. I worked on a set of stress tests for a major middleware product and came into the office on a Monday morning to check the 72-hour over-weekend runs. We were getting close to release date and things were settling down so I wasn't expecting anything major. Except they'd ALL failed. It took us far longer than I'd care to admit to figure out what had gone wrong - I wasn't working on it non-stop but I definitely remember it taking quite some time. I think it was a colleague who figured it out later that week.

Anyway, what had happened was that our Perl test harness was tracking elapsed time in the 72-hour run as seconds since the Unix epoch, but was comparing the values with the lexicographic string operator (lt) rather than the numeric one (<). Everything worked until the time ticked over from 999,999,999 seconds to 1,000,000,000.
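The failure reproduces in a couple of lines. Shown here in Python rather than Perl, but the mechanism (string ordering where numeric ordering was needed) is identical:

```python
# Epoch-second timestamps either side of the billion-second rollover.
before = 999_999_999
after = 1_000_000_000

assert after > before             # numeric comparison: the later time is greater
assert str(after) < str(before)   # string comparison: '1...' sorts before '9...'
# So any elapsed-time check done with string ordering (Perl's lt) suddenly
# sees post-rollover times as 'earlier' than pre-rollover ones.
```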

I just looked up those timestamps to check my memory, and I can now see why fixing it wasn't our top priority that week... the 999999999/1000000000 transition happened the weekend before 9/11.


Hi everybody... Richard Brown, CTO of R3, the firm behind Corda, here.

As Mike says, we're somewhat unusual in that there are many live, successful Corda deployments around the world. Mike's comment about how Corda is blockchain-like but not, strictly speaking, a chain of blocks is at the heart of this I think. Here's what I mean:

How many 'introductory' presentations have you been to (or, worse, given) to semi-technical people, where Bitcoin and other public blockchains are 'explained' by describing the components? You probably know the sort of pitch I mean: they laboriously build up the concepts - transactions, signatures, hashes, blocks, chains of blocks, mining, etc, etc. Such presentations are usually correct. But they impart almost no intuition. It's little wonder that so many 'business people' come away from them thinking that a blockchain is some sort of mysterious and magical technology.

There's a presentation I like to give where I go the other way. I give a one-line description of the problem Bitcoin solves [1] and then I help the audience 'invent' Bitcoin for themselves from first principles. There's an old blog post of mine that gives the rough idea [2]. The point is that successful architectures solve well-stated problems. Cargo culting never works.

Where I think many corporate deployments of blockchain went wrong was that they saw the huge enthusiasm for 'blockchain tech', had a vague intuition that 'inter-firm' or 'market-level' problems were ripe to be solved, but they never fully internalised that Bitcoin's (or Ethereum's) architecture isn't some sort of inviolable, handed-down-from-on-high blueprint... it's merely a very elegant engineering solution to a well-stated 'business problem'.

Yet many 'enterprise blockchain' platforms seemed to begin with the architecture of a public blockchain, and then tweaked it to make it palatable to businesses (eg engineering cumbersome privacy solutions on top to work around the inherently broadcast nature of the public chains). This always felt a bit weird to me.

With Corda, we were fortunate to have been given the time and space by our backers to write down our equivalent of the Bitcoin problem statement, and then to engineer a solution to that problem. Yes - of course, we knew we were in the 'inter-firm business process' space, and we knew the problem we were trying to solve was in some way 'blockchainy'. But we did try really quite hard to write down the problem statement [3] and then go forward from there, rather than starting with a pre-existing architecture and then modifying it.

Yes - as Mike says, architecturally it looks a lot like Bitcoin (eg it has an unspent-transaction output data model). But it also has a bunch of other things that, to this day, no other Blockchain has... eg the Flow Framework that allows decentralised inter-firm workflows to be modelled (think temporal.io but without any centralised infrastructure). And, like Mike says, Corda passes data point-to-point and confirms each transaction one at a time - no blocks, no broadcast.

You would not believe HOW MUCH GRIEF we got from the blockchain community for that design in the early days. Yet - several years on, it seems to be working.

[1] I claim that the 'requirement' for which Bitcoin is the solution is: "build me a system of un-censorable digital cash."

[2] https://gendal.me/2014/05/21/bitcoin-mining-the-first-techno...

[3] Our problem statement? "Build me a platform that enables multiple firms to record and manage the lifecycle of the business contracts they have with each other, minimising the need for any new third parties." At least, that's what I thought I was building. Mike might disagree, however... I know he likes the "decentralised database" interpretation of Corda.


I think your interpretation is fine :) Maybe I was aiming for a fully generalized decentralized database, but in the end there are probably too many finance- or business-specific concepts in the core model to really deserve the title. At least in Corda 4.

And I should have given you more credit in my post, apologies for that. Indeed Corda has successful 'blockchain' projects where other platforms often don't, largely because of your rigorous problem-definition-first mentality that kept us seeing Bitcoin/Ethereum as a bag of useful ideas rather than a template that we had to follow. It was a great collaboration which I enjoyed a lot, and the platform was very fundamentally shaped by your efforts and insights!


No need to layer on the flattery... I was hand-waver in chief (and like to think I did a good job in bringing people with us and creating the space for you and the team to work without too much distraction and noise), but it was you who actually brought all the pieces together into a coherent platform. There was nothing in your post I disagreed with!


Are there any MSR designs that require NO reprocessing (whether online, batch or other)?

I'd love to find actual data but, anecdotally, it seems like a large amount (vast majority?) of the nuclear legacy costs for countries like the UK, at least, come from the back-end - ie reprocessing. If you just store used rods (in perpetuity) near the reactor they were used in then the overall legacy footprint is pretty modest. Indeed, I think that's what the UK does now - our reprocessing facilities are now in decommissioning mode and the US did the same a long time ago.

Yes - part of the decision to stop reprocessing was proliferation risk. But I think it's also because reprocessing is so insanely messy and so easy to get wrong.

This is because if you 'reprocess' fuel, the waste problem just balloons... the rods have to be chopped up, dissolved in nitric acid, taken through a complex chemical process and you end up with vast amounts of liquid waste, various bits of undissolved gunk, a fiendishly difficult-to-decommission reprocessing facility and all the rest. Reprocessing plants are some of the most complicated chemical plants in the world... and when they go wrong (eg the UK's Thorp leak) they're almost impossible to repair owing to the radioactivity.

In the past, the purpose of the early reactors was to generate plutonium for weapons, and so reprocessing was, in reality, the key activity, with the reactors just the tedious thing you had to build to provide feedstock for this extraction process.

But if we don't want any new plutonium then there's no need to reprocess and the waste problem just becomes insanely easier.

To see what I mean, google the history of the UK's Sellafield (specifically the B.205 and Thorp plants) or Russia's Mayak or France's La Hague. So many leaks and accidents, all totally unnecessary if they hadn't been trying to reprocess the fuel. The idea of taking something small and stable (a rod) and turning it into a dangerous liquid and then trying to run it through a fiendishly complex chemical plant just seems nuts on its face.

Hence my question about MSRs... can you build one that doesn't require any of this tricky chemical engineering, whether 'online' or otherwise? If so, great. If not, why isn't this whole avenue just shut down as DOA?


> Are there any MSR designs that require NO reprocessing (whether online, batch or other)?

I think the Terrestrial IMSR (which is probably the one closest to commercializing in the West) is designed for a once-through uranium cycle, similar to current LWR plants. The idea, IIRC, is to replace the entire reactor vessel (including the fuel salt) every 7 years.

Not sure what the plan is for dealing with the spent fuel. If the fuel salt is water soluble (not sure, but salts tend to be, right?) I'd think some form of processing is necessary before geological disposal. But maybe that can be a cleaner and simpler process than a full PUREX.


There's a great (and insanely detailed) book on the rise and fall of Symbian by a guy who was there for most, if not all, of the journey:

https://www.amazon.co.uk/Smartphones-beyond-Lessons-remarkab...

When I say detailed, I mean detailed... David Wood seems to have copies of every email and memo he ever wrote when he was there... and he doesn't hold back when it comes to sharing them.

Required reading for anybody seeking to build a platform business.


Find the post in that article by Dennis May. It explains Nokia. The atmosphere in Symbian/Nokia towards the end was really bad.


El Reg has an in-depth review of the book, FWIW.

Here is the single-page view:

https://www.theregister.com/Print/2014/09/12/blockbuster_boo...


Well, he was a good talker and he seemed a very decent man but I'm not sure he can be divorced from the problems that it had.


None of the above. AFAIK, 'modular' here is shorthand for '(mostly) assembled on site from modules made in factories'. The idea is that if you can transform nuclear build-out from a civil engineering problem into a manufacturing problem you can massively lower costs if/when you reach some level of scale.


That was the same song Westinghouse was singing. It ended catastrophically.


No it didn't.

"Westinghouse Electric Company would file for Chapter 11 bankruptcy because of US$9 billion of losses from nuclear reactor construction projects. The projects responsible for this loss are mostly the construction of four AP1000 reactors."

"As of 2019 all four AP1000 reactors in China are operating."

Westinghouse made a large number of mistakes in their designs, suffered through the political climate after the Fukushima incident, and faced economic hardship because of ridiculously low gas prices.

How can you reduce that to "they tried to mass-produce power plants, that was dumb, so they failed"?

And where's the catastrophe?


China has ditched the AP1000 technology now. They are building no more of those reactors. So even in China it was a relative failure.

I'll add that there's good reason to think the data from China about nuclear projects being completed on time is invalid. There are cases where at the official start date for construction on some of their plants there was already much visible work that had been completed. Great way to be on schedule, just delay when you say you actually started.

The catastrophe was the financial implosion of Westinghouse and the great damage it did to Toshiba.


I won't dispute that Westinghouse was a failure. They failed at designing a power plant that meets the safety criteria for it to be built in the US. Toshiba bet they could do it efficiently, and lost $9 billion on that bet.

None of that has anything to do with Rolls-Royce. There's no reason to assume just because they're trying to solve the same problem that they're going to make the same mistakes and fail as well.

They very well could, and even if they did, your comment would still be useless.


What it has to do with Rolls Royce is that Westinghouse claimed they were going to get cost improvements from modular construction. But they utterly failed at that. The takeaway is that claiming you are going to reduce costs, and actually doing so, are very different things. Talk is cheap, especially without a history to show the words can be relied on.


I received one of these emails but, interestingly, whilst the email was about my personal blog, it was sent to my work address, which is not listed on my blog. Implying they did a bit of manual work to figure out how to reach me? If so, I wonder whether whatever they did is itself covered by CCPA or GDPR? I'm idly considering whether I should send them a request that is near identical to the time-wasting, deceptive email they sent me.


My team has built Conclave, which might be interesting: https://docs.conclave.net. The idea is to 1) make it possible to write enclaves in high-level languages (we've started with JVM languages), and 2) make the remote attestation process as seamless as possible.

The first part is what most people fixate on when they first look at Conclave. But an equally important thing is actually the second part - remote attestation.

The thing a lot of people seem to miss is that for most non-mobile-phone use-cases, running code inside an enclave is only really valuable if there is a _user_ somewhere who needs to interact with it and who needs to be able to reason about what will happen to their information when they send it to the enclave.

So it's not enough to write an enclave; you also have to "wire" it to the users, who will typically be different people/organisations from the organisation hosting the enclave. And there needs to be an intuitive way for them to encode their preferences - eg "I will only connect to an enclave that is running this specific code (that I have audited)" or "I will only connect to enclaves that have been signed by three of the following five firms whom I trust to have verified the enclave's behaviour"... that sort of thing.


Is a user necessary? I feel like one thing I'd use an enclave for is as a signing oracle for service to service communications.

Like I have service A and service B. A is going to talk to B, and has some secret that identifies it (maybe a private key for mTLS). I'd like for A to be able to talk to B without having access to that secret - so it would pass a message into the enclave, get a signed message out of it, and then proceed as normal.

Would that not be reasonable? Or I guess maybe I'd want to attest that the signing service is what I expect?


> Or I guess maybe I'd want to attest that the signing service is what I expect?

Exactly. If you have a threat-model where you want to limit access to your secrets from a limited code path, you need to attest that only specific, signed code is running within the enclave that can access the secrets. You might only need this to satisfy your own curiosity, but in practice it probably is something you need to prove to your internal security team, third-party auditor, or even direct to a customer.


Got it, ok. Yeah, I think that's reasonable, though I do also think that it's "extra". I would consider moving the key to the enclave without attestation to be a win, though I very much like the idea of having that level of authenticity as well.

Thanks for clearing that up.


I may not be fully understanding your scenario but it sounds like A needs to prove to B that it knows a secret but doesn't want to actually reveal/send the secret to B?

And the idea, therefore, is that A sends the secret to an enclave, which inspects the secret and, if correct, signs a message to say "I, the enclave, have verified that A does indeed know the secret". (Apologies if I've oversimplified or got this wrong).

But assuming the above is roughly correct then, without remote attestation, you have a problem, and it comes down to the question of who's running the verification code, I think.

If A is running the checker, why should B believe what it says? If A is running the code, they can just change it so that it signs the statement irrespective of whether it's true.

But If B is running the checker, then A will have just sent their secret to a service run by B, violating the requirement that A doesn't send the secret to B!

You could ask a third party to run it of course. But if you don't want to introduce that third party then this is where remote attestation comes in:

If A is running the checker in an enclave then RA allows B to verify that the "A knows the secret" message really did come from a codepath that has actually done the right thing. In this scenario, B is the "user" of the enclave from the perspective of reliance on Remote Attestation. (Aside: I know it's weird to think of an actor that doesn't interact with a system to be a user of it, so I'm probably using poor terminology when I say 'user'... it's more that, in this scenario, B is _relying_ on properties of the enclave such as its attestation)

And if B is running the checker then RA allows A to verify that it won't just turn around and reveal the secret to B.


Almost but not quite. A needs to prove to B that it's allowed to talk to B. So there's a signing service in an Enclave that A can access. It passes a message in, the enclave signs the message, and A sends the message to B.

The secret never leaves the enclave.

The goal here is that if an attacker can execute code within A's operating system, they cannot exfiltrate the secret. They might be able to get the enclave to sign on their behalf, but that's significantly better than an exposed secret - simply removing the attacker from the box would be sufficient to remediate, vs having to rotate the secret.

To mitigate impersonation, I suppose one could do a number of things involving a second key, but I think this simple version demonstrates the value of having a signing oracle. This is actually not an atypical approach, just not using SGX - I know companies that keep their signing keys in separate processes, which are mutually seccomp'd such that they can only pipe messages to each other, for signing apps before publishing. But in the SGX case you have a much stronger guarantee than just seccomp.

So to me the only problem that attestation solves here is if the attacker is somehow in the SGX enclave, but actually the much more likely scenario is that they aren't, and that they just ask the oracle to sign on their behalf - because B can't verify that A is the one asking to sign. Given that there is a single entity deploying both the software in the enclave and the service that interacts with it at least, that seems to be the case to me - like, in this scenario A, B, and the software in the enclave are all deployed by me, barring malicious action to interfere with that - but again, the most likely scenario is the attacker just owned the box and has a regular user on there.

But also, A can prevent an attacker from impersonating it to the oracle by having another keypair shared between it and the enclave, and then it becomes a matter of protecting that memory from an attacker who can almost certainly scrape your memory - a hard thing to do.

So you end up with:

System 0, A: Key 0
System 0, Enclave: Key 1, Key 0
System 1, B: Key 1

Key 0 is used to 'auth' A to Enclave. Key 1 is used to auth A to B (via enclave oracle).
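A toy sketch of that two-key arrangement, in plain Python with HMACs standing in for signatures (no SGX involved; key names and values are hypothetical - a real enclave would derive Key 1 from sealed, CPU-bound state):

```python
import hashlib
import hmac

KEY0 = b"shared-between-A-and-enclave"   # hypothetical: 'auths' A to the enclave
KEY1 = b"shared-between-enclave-and-B"   # hypothetical: auths A to B via the enclave

def enclave_sign(msg: bytes, a_tag: bytes) -> bytes:
    """The oracle: check A's authentication (Key 0) before signing with Key 1."""
    expected = hmac.new(KEY0, msg, hashlib.sha256).digest()
    if not hmac.compare_digest(a_tag, expected):
        raise PermissionError("caller did not authenticate as A")
    return hmac.new(KEY1, msg, hashlib.sha256).digest()

# A's side: prove knowledge of Key 0 to the enclave, get a Key 1 tag back.
msg = b"hello B"
a_tag = hmac.new(KEY0, msg, hashlib.sha256).digest()
b_tag = enclave_sign(msg, a_tag)

# B's side: verify with Key 1. B never sees Key 0; A never sees Key 1.
assert hmac.compare_digest(b_tag, hmac.new(KEY1, msg, hashlib.sha256).digest())
```

An attacker on A's box who hasn't scraped Key 0 out of memory can't get the oracle to sign for them, which is the impersonation mitigation described above.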

This is just my perspective on it.


That's helpful - thanks.

We don't presently support it in Conclave but SGX (which we use) does, I believe, support the idea of, in effect, packaging up a secret in a program and then encrypting it so it can only run in an enclave and hence keep the secret safe even when running on malicious hosts. I'd need to check but I suspect you're right that there are situations there where RA isn't required.

But to take your specific example (and maybe I'm still misunderstanding), does your scheme actually work in practice? Let's assume a simple model where the enclave runs on A's machine. So we can assume that requests to sign something come from A. This avoids us having to worry about A having to authenticate to the enclave, which just leads us to a circularity (how does A protect the key it uses for authentication, etc)?

And now we introduce the attacker, as in your scenario. As you say, eliminating the attacker removes their ongoing ability to interact with the enclave, since it expects to communicate only with locally running processes.

Except... if the attacker is on your box, they could simply take a copy of the enclave! And simply run it on their own machine. It's possible SGX contains the ability to lock an enclave to a specific CPU, in which case your scheme seems like it should work (to my untutored eye... I lead the Conclave team but am by no means an expert)... but I'm not actually sure it works that way. I'll look in to it.


Just checked... yeah... you can arrange so that an encrypted enclave can only run on a specific machine through careful use of SGX primitives. So I think your idea would probably work.


> Except... if the attacker is on your box, they could simply take a copy of the enclave!

Yeah this is the part I'm assuming isn't possible, perhaps out of ignorance. I believe that, at least in SGX's case, this is possible because SGX exposes per-CPU keys, and the ability to derive secrets from those keys. So if you moved the enclave (I actually have no idea how moving an enclave works either fwiw) it would no longer be valid.

But yeah, this all kinda goes to "I have no idea what I'm doing with enclaves" lol, this is just the use case I have - keeping a secret stored in one so that an attacker can not exfiltrate it.


Hi there - Richard Brown, CTO of R3 here. We support and maintain Corda, an open source permissioned blockchain (that is not currently part of Hyperledger). Brian has done a good job of putting Hyperledger's case and I agree with him in a lot of his responses. But I wonder if there's a broader point to be made too.

One of the key questions being asked here, it seems, is: what problem (if any!) do permissioned blockchains solve? Put another way: if you're not trying to build a censorship-resistant, decentralised payments network, for example, why do you need any of this stuff?

To be frank, where I think _some_ private blockchain platforms have gone wrong is that they have never satisfactorily answered this question. Instead, there has been a leap of magical thinking from "Bitcoin is amazing" to "let's cargo-cult some of the same ideas and use them to solve (unspecified) problems in business." This is not a critique of Hyperledger btw... It's true I've made some high-profile critiques of Hyperledger Fabric's design choices in the past but that isn't my point here. Indeed, I'd hope most of the Hyperledger community would agree with this post.

The thought process we went through when we designed Corda at its most simple was: how can we build a system that allows parties who wish to transact (but who don't fully trust each other) to maintain accurate shared records of their dealings with each other without reliance on a third party?

I know that _sounds_ either vacuous or trivial but I really don't believe it is. I still don't think I've written a totally satisfactory description of my vision but the article linked to below probably comes closest (scroll quickly past the shilling I do for our commercial distribution at the top and tail... the meat is in the middle!)

https://medium.com/corda/markets-are-decentralised-and-the-s...

tl;dr of that piece: firms who transact with each other in the real world - with paper and phone calls and faxes - don't need a centralised third-party toll-taker to manage and record their transactions for them. So why is our instinctive response to introduce one when those same firms decide to transact electronically? Whether it's a centralised database (who runs it? what do they charge? what power do they have? who holds them accountable? what happens if it goes down?) or a formally constituted infrastructure firm like in the financial markets, adding a centralised entity where previously there were none feels like inadequacies in technology forcing industries to change their market structures. Tail wagging the dog.

And yet, without such a thing, you end up in a total mess with each firm holding and maintaining their own records, which are invariably out of sync (out of consensus) with their counterparts.

A key problem we try to solve with Corda is to enable trading partners to connect their applications to each other in a way that ensures they're in sync at all times. And this requires far more than just conveying data from one side to the other... it's ensuring it's interpreted and processed in the same way - deterministically, in the same order. Indeed, the abstract for the Corda technical whitepaper actually calls it a "decentralised database" (note: not "distributed database").

This takes you into a completely different design space where many (but, crucially, not all) of the same principles underpinning public blockchains are needed... but where requirements such as transaction finality, strong identity, interop/integration with existing systems, reuse of existing codebases, developer productivity, ability to deploy behind a firewall yet connect to peers across the internet, etc., come into play.

The result is that platforms like Corda look similar to public blockchains on one level (chains of transactions, deterministic execution of business logic, digital signing of transactions, decentralised consensus algorithm to confirm and order transactions) but also very different (no crypto economic incentive, settlement finality, runs on the JVM so the world's 10m Java developers can use it easily and so on)

