Show HN: Cyph – Encrypted chat in 30 seconds (cyph.com)
39 points by buu700 on Nov 25, 2014 | hide | past | favorite | 43 comments


Yeah... this is a really dangerous and fundamentally flawed idea. Anyone with a basic grasp of cryptography will realize how obviously broken this is:

1. It uses JS encryption, which is essentially useless as you could modify it any time to intercept the messages.

2. You claim to be using 'OTR' but there is no key verification, which means it is trivial to intercept. Therefore there is no point at all to using OTR. In addition the point of OTR is to prevent server operators from intercepting communications, which you are still able to do.

3. You've rolled your own crypto library, which hasn't been audited and isn't even known to work correctly or interoperate (i.e: with XMPP OTR clients) with verified implementations.

4. You're using CloudFlare, which means CloudFlare also holds a copy of your TLS key, which means they could also intercept the messages.

If you get hacked, if your TLS key gets broken, or if one of your employees wakes up on the wrong side of the bed, it's game over. In a nutshell, this is marginally more secure than logging into Facebook or Gmail over HTTPS and sending each other messages. In fact, in some respects it's probably less secure, because Facebook and Google have strong security teams to monitor intrusions into their servers.


Sorry about the confusion; should have some details up on the site soon!

1 and 2) See https://news.ycombinator.com/item?id=8660233 — acknowledged that it's not perfect or 100% trustless yet, but we have some ideas to address that in the future, and for now it's still better than nothing (which is important in cases where you'd have otherwise used nothing).

3) We haven't rolled our own crypto. We're using the JS OTR implementation recommended by cypherpunks.ca (which hasn't been formally audited as far as I know, but is understood to be pretty good).

4) Not using CloudFlare for SSL or CDN, only for DNS management. Our CA is DigiCert (same CA as Facebook and GitHub) and we're running on Google App Engine; it'd be a pretty big deal if either one of those were hacked.


1. This is not about being perfect, this is about being trustworthy or secure. Right now it is very dangerous to be promoting this as even remotely private, as you do on both your website and Twitter. This is no more secure than chatting over Facebook. There is no 'clever solution' to get around crypto in the browser; it's always bad.

2. There is absolutely no point to using end-to-end crypto in this setup, because it can all be intercepted. Not to mention there is no key verification, so even if your JS weren't interceptable it would still be trivial to intercept messages. It's a red herring at best and deceptive at worst.

3. I stand corrected, although JS (regardless of whether it's in the browser or not) is usually considered terrible for crypto implementations because of its math bugs.

4. You might want to check that yourself: https://www.ssllabs.com/ssltest/analyze.html?d=www.cyph.com&... shows you're pointed at CloudFlare's SNI server.

I think what you fail to understand is that you're relying entirely on TLS for all of your security, which means a LOT of different people are able to read these messages. It's not just your CA that could be compromised to hack you, it's ANY CA. If, say, the Chinese government decides they want to read everyone's messages, all they need to do is issue a cert for your domain. Or any of your staff. Or anyone at CloudFlare. Or anyone with access to your server. Or access to your home computers which have login credentials.


4) We're using CloudFlare to redirect our naked domain to www.cyph.com.

Re: the rest, assuming the user trusts that we haven't served up malicious code (which, again, I acknowledge isn't completely trustless for now), I disagree that JS crypto is bad/useless. It definitely was bad/useless at one point in time, but with the introduction of the Web Crypto API I haven't seen a convincing argument that it's still strictly inferior to C/C++ crypto. (And, to be clear, Cyph will refuse to run in a browser without Web Crypto API support, e.g. any version of IE before 11.)
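To make that concrete, here's a minimal sketch of what requiring and using the Web Crypto API looks like (illustrative only, not our actual code; the function and key choices are mine):

```js
// Sketch: bail out if the Web Crypto API is missing, otherwise use it for
// symmetric encryption with a non-extractable key.
if (!window.crypto || !window.crypto.subtle || !window.crypto.getRandomValues) {
  throw new Error('Web Crypto API unavailable; refusing to run');
}

async function encryptMessage(key, plaintext) {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // 96-bit AES-GCM nonce
  const ciphertext = await crypto.subtle.encrypt(
    { name: 'AES-GCM', iv },
    key,
    new TextEncoder().encode(plaintext)
  );
  return { iv, ciphertext };
}

// Usage: generate a session key that can never be exported from the
// browser's crypto layer, then encrypt with it.
crypto.subtle.generateKey({ name: 'AES-GCM', length: 256 }, false, ['encrypt', 'decrypt'])
  .then(key => encryptMessage(key, 'hello'))
  .then(result => console.log(new Uint8Array(result.ciphertext)));
```

(Older IE exposes a prefixed, non-promise variant under window.msCrypto, which is part of why a hard requirement on the standard API is the simpler choice.)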

I'd argue that a safe language like JavaScript running in as well-vetted a sandbox as in modern browsers has a lot of pretty favourable security properties (entire classes of exploits are impossible here, as opposed to in Pidgin/OTR whose implementation is known to be less than ideal for security[1]).

1: http://motherboard.vice.com/en_ca/read/secure-messaging-migh...


4) So they can redirect all users to a non-HTTPS site. That's interesting.

What you're saying is ridiculous; it has nothing to do with "completely" trustless, you're not trustworthy at all. The "we promise we won't serve malicious code" is the same securi

Basically your users have to trust:

- You
- Your staff
- CloudFlare
- Google
- Any computer used to design, build or login to your web app
- Anyone with access to any of these computers
- Every single CA in existence (including the US and Chinese governments)
- Anyone with resources to spoof a CA SHA-1 signature (anyone with ~$700,000 [0])
- The authors of the JS OTR library and any other code you use

Do you see the problem here? There are literally millions of people in your trust chain.

Libpurple sucks, but browsers are the #1 target for exploit writers, not to mention the HUGE attack surface they pose. There are just so many angles of attack on your service. Anything hosted on a shared server is not secure. The words "cloud" and "security" should not be in the same sentence. No offense, but you guys really do not seem like cryptography experts, and there are so many pitfalls here that I would not recommend trying to promote this until you completely re-design everything.

As far as WebCrypto.. I'm just going to leave this here: http://tonyarcieri.com/whats-wrong-with-webcrypto http://matasano.com/articles/javascript-cryptography/

[0] https://www.schneier.com/blog/archives/2012/10/when_will_we_...


Explain to me the trust chain of a crypto messaging app that you DO think is safe and secure. Seriously, pick one, let's talk about it. Let's compare and contrast.

Or would you prefer to parrot the same tired old arguments that pop up on the HN comments every time the concepts "web browser" and "cryptography" are used together?


Here, I'll speed it up for you with this conclusion:

Unless people are downloading from source, verifying signed checksums, and building their crypto applications from source they are just as susceptible to any of the issues that browser crypto is.

Prebuilt executable? Can't be trusted. A modified version could have been substituted at download time for a particular user.

Web site where signed checksums are published? Can't be trusted. It could have been hacked or altered.

Checksum embedded in executable which checks itself at run time? Can't be done.

Trust is the issue. Trust NEVER goes away. The only way to know for sure is if you compile all your own code. But wait, remember Ken Thompson on Trusting Trust? You had better be building your own compilers as well...

So basically you're saying that no one but computer experts who build their entire toolchain from scratch should be using any sort of application that uses cryptography.


Actually that's not what I'm saying at all. I'm critiquing the implementation of this for several reasons:

-The JS crypto can be trivially MiTMed. The approach to checksumming the JS that you outlined in another comment, using external servers, is laughably terrible

-The OTR is pointless. The keys aren't being verified, so again, anyone could MiTM the key exchange and trivially intercept messages, even if there were no JS issue. This is the main point of the app, and it's completely broken.

-The entire application is relying on the security of the TLS PKI, which is horribly, horribly broken. This opens it up to attack by a huge number of people.

-It's being run on a shared server, with access to the keys by the real owners. Not to mention VM memory-leaking attacks by other residents, etc. Having your entire application logic hosted online and delivered to your users every time they access the service has security ramifications, whether you'd like to believe it or not.

So basically what YOU'RE saying is that trusting 1 developer and trusting millions of people is the same, because at some point you cannot verify every bit in the executable. Using browser crypto is worse than executables for the same reason PFS is better than non-PFS: you can't retroactively hack or MiTM something. If both parties already have the executables stored locally (or delivered securely by an offline medium like a USB stick and verified), the only way to intercept the data is to compromise both endpoints.

TL;DR your hyperbolic "trust as slippery slope" argument is dumb


All I'm doing is defending use cases for the Web Crypto API.

Your arguments are just as hyperbolic, BTW. Meeting up in person with USB sticks for software updates? And questioning the security of TLS?

And is JS checksumming more or less laughably terrible than publishing a checksum of an executable on a web site?

And does your JS MiTMed scenario depend on broken TLS?

You're going to need to do more than just call something "laughably terrible".

I'd prefer if you took the conversation about JS checksumming to the specific thread so as to not complicate things.

Software will always have updates and they happen automatically with increasing frequency on consumer computing environments like iOS, Mac and Windows.

BTW, just who are you anyways and why are you so intent on making me look like a fool for attempting to discuss viable web-based cryptography? I see that your user account is brand new.

Why would anyone be pursuing the Web Crypto API? Have you broached these issues with the W3C working groups? Don't you think they'd like to know that their work is futile? Shouldn't some manager on the Chrome team have stopped them from implementing such a pointless protocol?

And no, I don't think "trust as slippery slope" is a stupid argument because we have to deal with that issue every day with every aspect of consumer computing. With all of the security holes in Windows, TLS, OpenSSL, SSH and in practically every other day-to-day usage of crypto... yet somehow people can still safely and securely do all sorts of things. There is a spectrum between "totally secure" and "usable by normal people". Your refusal to acknowledge this spectrum is asinine and attacking me for mentioning it is uncalled for.

Wanna talk about trust? I don't trust you and I don't trust your approach to these arguments. I don't trust your faux-elite sounding hacker nickname. Maybe you or who you work for is threatened by the average person having end-to-end encryption in web applications? Maybe you're a marketer who needs that data. Maybe you're the NSA. Maybe you want to come across as an all-knowing crypto guru and charge obscene hourly rates to companies who you've scared shitless. Maybe you're just a troll armed with the same tired old arguments I've been hearing for years. Who knows. Again, you're using a brand new account. No one knows who you are or what your agenda is... "trust as slippery slope" indeed...


I find it hilarious you're still fixated on the idea of the trust chain and haven't addressed any of my actual points about the crypto, besides making a lot of hand-waving and saying "well it's not secure, BUT NOTHING'S SECURE SO THAT MEANS IT'S GOOD ENOUGH". This is not end-to-end, the OTR is broken and doesn't verify keys, end of discussion. The JS can be MiTMed, end of discussion.

>And questioning the security of TLS?

No, I'm questioning the TLS PKI, which is what this is dependent on; please learn the difference. Currently any CA, or anyone who hacks a CA, will be able to make a fake cert, which has happened many, many times. Everyone from Bruce Schneier to Moxie Marlinspike condemns the CA PKI; in fact, according to Schneier it will cost only ~$700,000 by 2015 to make a SHA-1 collision and produce a fake certificate without any hacking at all. This is why real security solutions use pinned certificates.
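(For illustration, a sketch of what pinning looks like for a web property: the draft HTTP Public-Key-Pins header, later RFC 7469 and since deprecated. The Express app and pin values here are placeholders, not real hashes; native apps can pin the actual certificate, which browsers can't do from JS.)

```js
// Sketch only: send HPKP pins for the site's public keys.
const express = require('express');
const app = express();

app.use((req, res, next) => {
  res.setHeader(
    'Public-Key-Pins',
    'pin-sha256="base64PrimaryKeyHashPlaceholder="; ' +
    'pin-sha256="base64BackupKeyHashPlaceholder="; ' +
    'max-age=5184000; includeSubDomains'
  );
  next();
});

app.get('/', (req, res) => res.send('pinned'));
app.listen(8443);
```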

Blindly putting your faith in SSL/TLS to accomplish all your goals is really, really dumb. There have been several MAJOR (CRIME, BREACH, POODLE, Heartbleed, CCS) attacks against it in the last 3 years alone.

>You're going to need to do more than just call something "laughably terrible"

There's a reason no one else does this: it's insecure and unsustainable. Someone could just detect your monitoring server's requests and serve it a 'good' version. Or better yet, only serve a 'bad' version to the people it wants to target. Or just DDoS your monitoring server. If they have the capabilities to intercept the TLS requests to Cyph, then what's stopping them from intercepting the requests to your external server? I think what you fail to understand is that the interception would probably be client-side, not server-side. Not to mention that every time your code actually changes you have to go manually verify that it's correct and not an attack. You're essentially hand-rolling your own signature scheme on extremely shaky ground. You're going to need to do more than just conjure up your own amateur solutions and dismiss all security concerns out of hand because it "doesn't have to be perfect".

>There is a spectrum between "totally secure" and "usable by normal people". Your refusal to acknowledge this spectrum is asinine and attacking me for mentioning it is uncalled for.

Where did I do that exactly? You're also implying something can either be secure or usable. I acknowledged there is a spectrum of security; like I said earlier, this is marginally better than talking over Facebook. In fact, Facebook might be a little more secure because it has a dedicated security team that monitors intrusions. Ignoring the JS problems and the hosting problems, the end-to-end crypto doesn't verify keys, which makes it useless. It's like using a self-signed cert for your message encryption. If that's your line in the sand for "secure enough" then go right ahead and use this, but marketing it as private is irresponsible and deceptive.

>With all of the security holes in Windows, TLS, OpenSSL, SSH and in practically every other day-to-day usage of crypto... yet somehow people can still safely and securely do all sorts of things

Yes, because they were lucky or oblivious to the fact they got owned. This is like saying "I leave my car doors unlocked all the time and it hasn't been stolen yet"

>Maybe you're the NSA. Maybe you want to come across as an all-knowing crypto guru and charge obscene hourly rates to companies who you've scared shitless. Maybe you're just a troll armed with the same tired old arguments I've been hearing for years. Who knows. Again, you're using a brand new account. No one knows who you are or what your agenda is.

Ad hominem mixed with a little tinfoil, I like it. Maybe you and Cyph are the NSA and are in collaboration to promote broken crypto to the masses? Or does the fact that you self-aggrandize by using your real name on HN make you legitimate? Maybe you're a design-centric know-nothing who thinks that WebCrypto means "insta presto! the cloud is secure now! web 4.0 everything because 'it just works'" ?


Again, let me repeat this because you seem to be totally deaf: All I'm doing is defending use cases for the Web Crypto API.

I'm not and have never defended this specific app. All I'm doing is pointing out that your wholesale dismissal of browser crypto is unwarranted.

You're clearly just trolling me now because you've refused to answer any of my questions specific to the valid use cases of browser crypto and how ANY software that is updated or updates itself is susceptible to issues you've levied at browser crypto.

I've been reading infosec arguments on the net since the 90s. I've heard everything there is about the dangers of creating your own crypto systems and every one of your snide remarks. tptacek has made a career off of the same 15 things you've said over and over and over again. I get it.

I also completely disagree with tptacek and every other dude who outright dismisses browser crypto. I also think that there are a lot of important benefits from attempting the very difficult task of creating secure browser crypto environments.

Your general approach to the issue is to turn in to a raging asshole and attempt to make anyone who is exploring these issues look like a fool. You engage in an incredible amount of personal attacks and you attempt to justify the means by harping on about the importance of the ends.

I fundamentally disagree that the ends can or ever will justify the means for anything. When you do that, what we're left with is assholes like you who make the world of crypto and security an environment that is incredibly toxic to creativity.


No, saying snarky things about crypto is my hobby, not my career. My career is actually breaking these stupid systems.


Sorry, I was actually referring to your career as all-time HN karma leader! I have a lot of respect for your ACTUAL career, don't get me wrong.

While I have your attention, could you take a look at this: http://substack.net/offline_decentralized_single_sign_on_in_...

BTW, there's enough of us idiots out there that we're not going to stop until we've figured this out! Being able to ship a decentralized version of say Facebook or Twitter that runs in the browser and allows people to manage their own private keys (aka, manage their own digital identity) is incredibly motivating. I don't trust platforms like iOS or Android to not shut down certain kinds of apps and I don't think my mother will ever be running an open-source OS on her tablet.

I have to say that I definitely lost my shit in this thread... and I'm sorry. I have a tendency to get emotional about this kind of stuff because the politics are a big part of what motivates me to build these kinds of things. I take these hard-line stances against browser JavaScript as an assault on my principles for digital identity, but also as a creative individual who strives for a positive and constructive environment.

That doesn't justify me calling people names. Especially if I'm going to be talking about the ends not justifying the means... ugh...

However, I do think that articles like this are very close to being out of date: http://matasano.com/articles/javascript-cryptography/

The Web Crypto API alleviates a number of these issues and novel uses of application caching manifests seem to solve the issue for good.
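To sketch what I mean by controlled updates via appcache (illustrative, not production code): with a manifest in place the browser keeps serving the cached app and only downloads new versions in the background, so the page can put the user in charge of whether to adopt them.

```js
// Sketch: listen for a new cached version and ask the user before swapping.
var appCache = window.applicationCache;

appCache.addEventListener('updateready', function () {
  if (appCache.status === appCache.UPDATEREADY) {
    var accept = window.confirm('A new version of this app is available. Load it?');
    if (accept) {
      appCache.swapCache();
      window.location.reload();
    }
    // If declined, the user keeps running the known-good cached version.
  }
}, false);
```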

Check out this article, especially Gotcha #4. http://alistapart.com/article/application-cache-is-a-doucheb...

So in this case we've got an authoritative article that says "never ever far-future cache the manifest". And we've got an authoritative article that says "never ever do crypto in the browser".

Both articles take a pretty heavy-handed approach and don't really set out to explore the deeper possibilities. They both kind of set out to create an environment where readers won't question these principles for fear of being made to look like a fool in front of their peers.

But the fundamental hacker ethos is to read articles like this and say "the intent of the APIs and authors be damned, I have other considerations!".

Anyways, I'd really appreciate your thoughts on this stuff... all of it, including what I and a number of other people perceive as a toxic environment for creativity related to cryptography and infosec.


If you want to be creative with cryptography, put in your time first actually breaking systems. The world is full of poorly-implemented crypto code. Go search Github for Python, Ruby, and PHP crypto. File bugs. Every crypto primitive available to you has misuse cases that you can find in actual deployed software. If you can't exploit them, you're not ready to use those primitives yourself.

One place I have put my own time in is in teaching people how to do this. This site:

http://cryptopals.com/

... was preceded by more than a year of individual person-to-person email-based tech support, in which thousands of people got through several sets of these challenges, and almost a hundred finished.

I have zero sympathy for the people who are unwilling to put that time in, have their feelings hurt when they stumble trying to get their own systems working, and then complain of a "toxic culture" in cryptography. That "toxic culture" takes its place alongside the toxic cultures of nuclear controls system engineering, gastroenterological surgery, and appellate court law.

I have no idea whether this describes you or not, because your comment started out by asking me to look at a technical problem, then attempting to rebut a statement I didn't make, then invoking the fiasco that is the Web Crypto API, before finally complaining about toxic cultures. I don't know where to start, and so I'm hitting the eject button.


I apologize for the lack of focus in my comment.

What do you think about that appcache solution to controlled updates?


Wow.


I've read this exact same response from you probably a dozen times over the years.

You're convinced that the means justify the ends.

It is my opinion that even in nuclear controls system engineering, gastroenterological surgery, and appellate court law that being positive and supportive of your peers is probably a better approach than however you want to justify having zero sympathy for people.

Also, in my case I happen to be interested in creating something as life-threatening as a product for posting photos of cats to your friends, all while having control over your digital identity/keys. I'm interested in an ecosystem of software applications that are based on open platforms and protocols like the web and driven by individual ownership of identities and data. It seems like a fun, liberating, creative and social way to make software!

I'll also hit the eject button because it is clear that you're only interested in conflict.


Not weighing in on whether or not Web crypto is effective, because you and William seem to have that discussion covered pretty well (and honestly it's too long for me to read right now), but you're right that we should have had an option to authenticate the OTR; thanks a lot for pointing that out!

I just deployed a fix that I think is actually pretty clever (haven't seen anything similar done anywhere else, anyway). Rather than intruding on the existing UX, I'm just automatically generating a shared secret on the first client and putting it in a fragment at the end of the generated URL, which is then used by both clients for SMP. So now, to successfully mitm a Cyph chat server-side an attacker would have to have compromised both our server and the users' SMS/email/etc., and even then they'd still have a pretty small window of time in which to execute their malicious code on the Cyph backend.
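Roughly, the idea looks like this (a simplified sketch rather than the actual source; the URL shape, chatId, and otrSession wiring are stand-ins, though smpSecret is part of the arlolra/otr API):

```js
// Sketch: the first client generates a random secret and appends it to
// the URL fragment, which browsers never send to the server; both sides
// then use it as the SMP secret to authenticate the OTR session.
function generateSharedSecret(byteLength) {
  const bytes = crypto.getRandomValues(new Uint8Array(byteLength || 16));
  return Array.from(bytes, b => b.toString(16).padStart(2, '0')).join('');
}

// Client A builds the link to share out of band (SMS, email, etc.):
const chatId = 'exampleid';   // hypothetical chat identifier
const link = 'https://www.cyph.com/' + chatId + '#' + generateSharedSecret(16);

// Both clients recover the secret from their own copy of the URL and,
// once the OTR session is up, run SMP with it:
const sharedSecret = window.location.hash.slice(1);
if (typeof otrSession !== 'undefined' && sharedSecret) {
  otrSession.smpSecret(sharedSecret);   // otrSession: an initialized OTR instance
}
```

The key point is that the fragment never appears in any request to our server, so the server alone can't learn the SMP secret.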

It's not strictly foolproof, but it's still a hell of a lot better than leaving unauthenticated as the default. Combined with the two-factor auth feature we're rolling out in a few weeks, this should put us in a pretty good state. The only (mildly) inconvenient thing about this solution is that it precludes us from adding buttons to automatically text/email the link to your friend for you (since that would involve passing the secret to our server).

re: checksumming, the plan is to do it client-side, not server-side (I could have misread/misskimmed but it sounded like you were mistaken on this), and have the user manually agree before applying any updates to local appcache. The idea is essentially to cram a traditional release cycle into the Web, with a stable channel with biweekly formally audited known good payloads. Still need to prototype this and see if it's feasible, but on paper it sounds good to me.
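Something along these lines, where KNOWN_GOOD_HASH stands in for a digest published through the audited release channel (sketch only, not shipped code):

```js
// Sketch: before adopting a new payload, hash it locally and compare
// against a pinned known-good digest; ask the user about anything unknown.
async function sha256Hex(text) {
  const data = new TextEncoder().encode(text);
  const digest = await crypto.subtle.digest('SHA-256', data);
  return Array.from(new Uint8Array(digest), b => b.toString(16).padStart(2, '0')).join('');
}

// Placeholder value; the real digest would come from the audited release.
const KNOWN_GOOD_HASH = '0000000000000000000000000000000000000000000000000000000000000000';

async function maybeApplyUpdate(newPayloadSource) {
  const hash = await sha256Hex(newPayloadSource);
  if (hash !== KNOWN_GOOD_HASH) {
    // Unknown code: ask the user rather than silently updating.
    return window.confirm('The application code has changed. Accept the update?');
  }
  return true;
}
```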


Those problems are quite serious. A positive step is that they've been increasingly recognized as serious by people who develop and publish software, so there are significant transparency efforts underway now to try to make it possible to meaningfully tell where software came from, whether it corresponds to published code, and whether it's the same thing everybody else is using.

(Along with Mike Perry from the Tor Project, I gave a talk about some of these issues at Mozilla recently, which was pretty well-received. Tor has been very interested in making sure that compromises of their infrastructure won't compromise the software their users get, and that the developers can't be coerced into giving users backdoored versions; see Perry's "Deterministic Builds Part One: Cyberwar and Global Compromise", for example.)

To look at this from another angle, all of these issues are terrible gaps in software security, and terribly exciting opportunities for people to add backdoors (maybe more likely bugdoors), but now we have some tools and ideas that can start to address them!


Great!

Can we now stop pretending that browser crypto is the only thing that is susceptible to these issues?


Thinking about the answer to this hurts my brain. Ultimately, with the amount of money and resources available to the "bad guys", I can't imagine anything being truly uncrackable.

But I am just a lowly infrastructure guy who builds VPNs; lots of smart folks out there :)


Oh, so an argument from authority?

First off, that Matasano article was written before the Web Crypto API and is out of date.

Second, Tony's article does make a good point... can you trust that the web site is going to be serving up code that won't spy on you?

Well, can you trust that ANY code on any platform won't spy on you?

People offer that Javascript doesn't have code-signing. Well, that's fucking bullshit. All that needs to happen is for someone to sign the checksums of the code served to the browser and publish them somewhere. You can then have a third-party site verify that the code is still what it says it is.

Personally, I use this pattern internally. I run my server-side APIs on a different host than my static front-end code. My server-side API then checksums the front-end code and makes sure it is legit before proceeding in certain situations.
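Roughly like this (a simplified sketch with placeholder hosts and digest, not my actual code):

```js
// Sketch: the back-end API fetches the static front-end bundle from its
// separate host, hashes it, and compares against the expected digest
// before proceeding with a sensitive operation.
const https = require('https');
const crypto = require('crypto');

const EXPECTED_SHA256 = 'replace-with-the-published-digest';

function fetchAndHash(url) {
  return new Promise((resolve, reject) => {
    https.get(url, res => {
      const hash = crypto.createHash('sha256');
      res.on('data', chunk => hash.update(chunk));
      res.on('end', () => resolve(hash.digest('hex')));
    }).on('error', reject);
  });
}

async function frontEndLooksLegit() {
  const digest = await fetchAndHash('https://static.example.com/app.js');
  return digest === EXPECTED_SHA256;
}
```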

Your arguments are weak and out of date.


I tried to submit an "Ask HN" item (motivated by this debate today) about what kinds of tools and best practices there are for doing this, but it didn't get voted onto the front page.

I'm concerned about the "third-party site verify that the code is still what it says it is" if an important part of the threat model is that an attack would be deployed against only some users, not all. How can the third-party site tell if the site is attacking an individual user? How can the individual user compare the version of the web app they received with the version that the third-party server thinks is appropriate or current?

I don't mean to dismiss this goal, because I really want infrastructure like this to work and become ubiquitous, I just think it might require a slightly more elaborate architecture with the addition of some kind of client-side component, if you're including the original developer in your threat model, and not just someone compromising the servers that host someone's copy of jQuery and giving everyone a backdoored copy.


Let's talk about targeting specific users in the scenario where you've kept your server-side API separate from your static front-end code.

The compromised front-end code server would need to know that specific user's host and client in order to target them. Users are logging in to the server-side API. They all get the same front-end code.

If a third-party site tested from a variety of hosts using a variety of spoofed client information, it would very likely detect code that targeted broader user fingerprints.
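A monitor along these lines could do the broad-fingerprint check (hypothetical sketch, Node 10+, with an invented URL; catching netblock-targeted attacks would also require running it from multiple vantage points, not just with multiple user agents):

```js
// Sketch: fetch the same asset with different spoofed client fingerprints
// and flag any response that hashes differently.
const https = require('https');
const crypto = require('crypto');

const USER_AGENTS = [
  'Mozilla/5.0 (Windows NT 6.1) Chrome/39.0',
  'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10) Safari/600.1',
  'Mozilla/5.0 (X11; Linux x86_64) Firefox/33.0'
];

function hashWithAgent(url, userAgent) {
  return new Promise((resolve, reject) => {
    https.get(url, { headers: { 'User-Agent': userAgent } }, res => {
      const hash = crypto.createHash('sha256');
      res.on('data', chunk => hash.update(chunk));
      res.on('end', () => resolve(hash.digest('hex')));
    }).on('error', reject);
  });
}

async function checkConsistency(url) {
  const digests = await Promise.all(USER_AGENTS.map(ua => hashWithAgent(url, ua)));
  const unique = new Set(digests);
  if (unique.size > 1) {
    console.warn('Inconsistent payloads served:', digests);
  }
  return unique.size === 1;
}

checkConsistency('https://static.example.com/app.js');
```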

You're right in that this wouldn't protect against targeted individuals. But neither does an executive order from the state.

Web applications are an orchestration of two separate pieces of software that can work together using multi-factor authorization and signature schemes to offer a substantial amount of enhanced security when client-side crypto is properly utilized.


In this threat model it seems unreasonable to me to rely on a particular front-end/back-end separation architecture as a main thing that protects users. If the two are not separated this way, or the back-end can cause redirects or can inject Javascript at all, the users won't be protected.

I like the "test from a variety of hosts" approach better, and I wonder what kinds of tools exist or will exist that will facilitate this -- and whether some web app developers are willing to commit that their architectures will support it (that the front-end code doesn't do an eval, that the front-end actually architecturally treats the back-end as untrusted, and that users can expect to get consistent page contents every time they visit the service, even with a different user-agent or from a different netblock). I'm not trying to make fun; I want to see this happen. It seems like a big change from existing practice!


The front-end/back-end separation architecture that I'm referring to is used for internal systems checksums. We're using it to make sure that the front-end code is still what we think it is... the main threat model it protects against is a front-end developer on the team having their machine compromised and an attacker deploying front-end code that wasn't signed by someone with access to the back-end code.

What I like the most about browser security is that it tackles this stuff head-on. There is no more thoroughly tested environment that runs untrusted third-party code. Code signing, checksums, signed software updates... all of these are most definitely issues on every other platform. They're the most difficult to protect against on the web which makes the web the best place to try and figure them out.

I'd really like to see a few additions to the Web Crypto API. Some sort of key management mechanism would be wonderful. And of course some sort of native mechanism to deal with signed code and checksums. As well as getting FIDO U2F standardized.
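For reference, the closest the current API gets to key management is non-extractable keys plus key wrapping (my own example, not anything from the spec discussions):

```js
// Sketch (assumes a modern browser with full Web Crypto support): generate
// a non-extractable wrapping key, then wrap any key that must leave the
// browser's crypto layer.
async function wrapSessionKey(sessionKey, wrappingKey) {
  // AES-KW wrapping of a raw AES key; both arguments are CryptoKey objects.
  return crypto.subtle.wrapKey('raw', sessionKey, wrappingKey, { name: 'AES-KW' });
}

async function demo() {
  const wrappingKey = await crypto.subtle.generateKey(
    { name: 'AES-KW', length: 256 }, false, ['wrapKey', 'unwrapKey']);
  const sessionKey = await crypto.subtle.generateKey(
    { name: 'AES-GCM', length: 256 }, true, ['encrypt', 'decrypt']);
  const wrapped = await wrapSessionKey(sessionKey, wrappingKey);
  console.log(new Uint8Array(wrapped));
}
demo();
```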


Total novice to security here, but curious about the answer. What if they keep Cyph as it is, with a warning that in-browser JavaScript is insecure? They can demo the functionality, and then if people want to continue using it (more securely?) they provide a Chrome extension to be installed, which is sandboxed, and maybe the client-side code is also published/signed by the developers with (for example) Keybase [0] for additional security improvements?

On Cyph: Really dig the look and feel of the app, gorgeous!

[0]: https://keybase.io/warp/warp_1.0.6_SHA256_e68d4587b0e2ec34a7...


Thanks!

This is our first step, but we have a lot more interesting things in the pipeline in terms of security features.


This looks really cool! I like the site; however, it can be hard to read the text with the flashy background. Perhaps have the flashy background scroll up with the first "panel"?

On the crypto side of things, are there any plans to create native clients? While I don't doubt that you guys are trustworthy, it would be nice if I didn't have to trust that the javascript I was being served didn't change between visits.

All in all it looks good, albeit with some trust issues...


Thanks!

We have some ideas about solving the trust issue — https://news.ycombinator.com/item?id=8660233 — but yep, native apps are definitely on the way.

Still playing around with the site, I'll set aside some time to dig into readability issues (I was noticing the same effect at certain parts of the video).


I would like to share the link on an iPad. But I guess you were too smart. When I tap on the link to select and copy it, some JS code autoselects the whole link. This is a nice idea and should save me some work. But the copy mode on the iPad breaks. So I cannot copy the link and share it.


Well that sucks; thanks for letting us know!

Chrome and Android have definitely gotten the most attention so far, though this is the first I'd heard of iOS having issues with that. I had run into a similar issue on Windows Phone (and successfully fixed it), but I'll have to revisit iOS.

As a temporary workaround, does it work when you try a second time? On mobile it should stop trying to grab the link for you after the first time.


I don't know how secure this is, but the look & feel of the site is awesome. If you happen to fail in this project, I encourage you to try another one, since you clearly have a talent for making really cool web apps.


lol, thanks! We're definitely trying to make Cyph the dopest chat app out there ("dopest encrypted chat" just doesn't sound like a terribly high bar to me).


Could the designers indicate what class of entities this is supposed to protect my chats from? The guy on the same WiFi? My ISP? Routers along the way? Government entities?

In a number of threads it was claimed that this is better than nothing or better than HTTPS; could you (all) elaborate on why that would be so? If I am just locking one door of my car and leaving the rest open, I sure would like to know, and at the moment this is what it sounds like, but I am quite the ignoramus, so please correct my misapprehensions.

EDIT

@buu700 Thanks for replying.


Sure, we can clarify that on the site. (Thanks for the feedback! It's become pretty clear that our supporting content/documentation needs a lot of work.)

The intent isn't to protect from certain classes of entities, so much as to absolutely protect with a certain degree of confidence. That is to say, Cyph is a convenient tool that probably keeps your secrets from, say, China; but if you're in a life and death situation you should consider carefully whether you have better options.

That being said, now that I think about it, there isn't an obviously better alternative in such a circumstance that I could recommend. Pidgin OTR is probably what I would have chosen a few years ago, and for now it's maybe still what I'd pick, but based on what I've been seeing in the OTR mailing lists and the Vice article I linked in another comment, it's really not a great option. TextSecure might be a good alternative if using it on a phone isn't too big a problem. There's always tried and true PGP, though where applicable the OTR protocol has much more interesting security properties IMO.

tl;dr: There's really no silver bullet, but Cyph has as good a chance as just about any alternative of protecting your secrets. I wouldn't stake my life on it quite yet, but I wouldn't stake my life on any existing alternative either. If things go as planned and we execute well on the next steps, it shouldn't be too long before I'd consider Cyph strictly better than any alternative.


Really cool execution. Love the cyphertext view. The key exchange seems a little non-secure given the length of the sharing URL?

How does it work?


Thanks!

As far as the crypto, right now it's just straight OTR (https://arlolra.github.io/otr/), but we have some cooler stuff in the works for non-intrusive non-ephemeral encryption.

I think the way we handle the URL mitigates a lot of potential issues (the design goal there was to make it as resistant to someone brute forcing the address space as possible, within reason):

* It expires after 10 minutes, at which point it's immediately recycled

* It's also immediately recycled the second the other guy connects to the cyph (meaning that, hypothetically, two unrelated pairs of people could be cyphing each other at the same URL)

That isn't perfect on its own, but it's good enough for almost any use case.
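Server-side, the lifecycle is roughly this shape (a sketch of the behaviour described above, not our actual implementation):

```js
// Sketch: a link ID is valid for ten minutes or until the second party
// connects, whichever comes first, after which it goes straight back
// into the pool for reuse.
const TEN_MINUTES = 10 * 60 * 1000;
const pendingChats = new Map();   // id -> { createdAt, firstClient }

function openChat(id, firstClient) {
  pendingChats.set(id, { createdAt: Date.now(), firstClient });
  setTimeout(() => recycle(id), TEN_MINUTES);   // expire unused links
}

function joinChat(id, secondClient) {
  const chat = pendingChats.get(id);
  if (!chat) return null;   // expired or already used
  recycle(id);              // recycle the moment the peer connects
  return { a: chat.firstClient, b: secondClient };
}

function recycle(id) {
  pendingChats.delete(id);  // the id may now be handed out again
}
```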

That said, in a few weeks we're also rolling out a two-factor auth feature to verify your friend via her phone number or email address, for when you really need to be sure. (We'll flesh this out a bit more later on, e.g. possibly integrating with Google Authenticator.)


People can't trust that the site is actually serving the right JS crypto implementation. It can be broken at any time, which would allow the site to intercept communications.

It needs client-side code to do it correctly, for example, something like the e2e Chrome extension.


Agreed that it's not a perfect solution (you still have to trust that I'm not brazenly serving up backdoored crypto), but it's a lot better than nothing.

(And practically speaking, before building Cyph, Josh and I pretty much avoided having to use encryption except in cases where there was no alternative.)

But I definitely agree that we'll have to come up with a clever solution to this problem. Off the top of my head (haven't thought this through — could be totally stupid), we may be able to do something clever with html5 appcache and notifying users when the hash of the code changes (and giving them the option to stay on their known good version).

Of course, once we have our native app out this issue will be trivial to avoid for people who really need to.

Edit: To clarify, I should correct you on one point: we are definitely performing the end-to-end encryption on the client — anything else would kinda defeat the purpose of Cyph...


https://www.cyph.com/privacypolicy Your second-to-last bullet point is not really possible.


Ignoring the semantics of the phrasing (that's just a temporary placeholder), the intent is to say that we won't ever willfully send a decrypted copy of our users' encrypted data to our servers.


A court order is a simple thing; are you implying that the operators of the site are willing to go to jail for denying a legitimate order on a free service?



