Hacker News | extr's comments

K2.5 is dog shit compared to leading OAI/Ant models.

The OpenCode guys have really surprised me in the way they've reacted to Anthropic shutting down the side-loaded auth scheme. Very petty and bitter. It's clearly just a business decision from Anthropic and a rational one at that, usage subsidization to keep people on the first party product surface is practically the oldest business move in the book and is completely valid.

This is not my impression, could you explain what you're talking about?

Ever since the shutdown of the side-load they've been pretty vocally anti-Anthropic on Twitter. Paranoid that Anthropic is going to torpedo them via some backdoor now that it owns Bun, insinuating that Anthropic shut down the auth from a position of weakness since OpenCode is a superior product, etc.

The thing is OpenCode IS a great product, I'm not sure it's "superior", but unfortunately the way things are evolving where the model + harness pairing is so important, it does seem like they are in a similar position to Cursor (and do not have the resources to try to pivot into developing their own foundational model).


I wouldn't call OpenCode a "great" product tbh. It's nice that it's FLOSS of course, but the overall quality is a bit underwhelming and it's clearly possible to build much better open agentic harnesses. It would be nice if more people tried to do this.

The OpenCode Bun dependency is an unsettling issue, I would imagine.

If you look at the last few weeks of commits, you can see they've been systematically ripping out everything Bun-specific and moving to Node.

I think frankly OpenCode is delusional to think that Anthropic is actually "concerned" with them in any way. Anthropic's concerns at this point are on the geopolitical level. I doubt stamping out ToS-violating usage of their subscription services is even on executive radar. OpenAI only allows it because it's a cheap PR win and they take those where they can get them.

OpenCode is not delusional; it would be delusional to think Anthropic won't act after they have already threatened them.

Yeah, I recognized the PR author from Twitter (same avatar) and man he really does come across as incredibly juvenile. Shamelessly talking up OpenAI while shitting on Claude models and the motivation is just so transparent.

I have a huge issue 10416 on OpenCode

https://github.com/anomalyco/opencode/issues/10416

- their stance on privacy


not sure i follow - do they leak my information to their own servers by default?

This is probably the most exhaustive answer to your question as of Jan 7: https://github.com/anomalyco/opencode/issues/459#issuecommen...

They also leaked all prompts to OpenAI until very recently.


Why does Anthropic care how the tokens are consumed?

Valid question. It's because they have a separate product intended for use with general tools: Their API.

Their subscription plans aren't actually "Claude Code plans". They're subscription plans for their tool suite, which includes Claude Code. It's offered at a discount because they know the usage patterns of this customer base.

OpenCode used a private API to imitate Claude Code and connect as if it were an Anthropic product, bypassing the need to pay for the API that exists for this purpose.

Anthropic has been consistent on this from the start. The subscription plans were never for general use with other tools. They looked the other way for a while, but OpenCode was openly flouting it, so they started doing detection and blocking.

OpenCode and maintainers have gone on the offense on Twitter with some rather juvenile behavior and now they're trying to cheekily allow a plugin system so they can claim they're not supporting it while very obviously putting work into supporting it.

Most of the anger in this thread comes from people who want their monthly subscription to be usable as a cheaper version of the public API, even though it was never sold as that.


Same reason movie theaters care about you not bringing your own snacks

You pay for snacks in the cinema and they lose money if you buy elsewhere. Where does Anthropic lose money when I use OpenCode?

This has been explained many times in this thread. Your subscription to Claude models for use in Claude Code is subsidized. That is, it is only meant to be used with that harness.

When you use that API key with OpenCode, you're circumventing that.


The AI companies can spare their whining about contempt of business model. They're selling a service.

That doesn't make sense.

The PS5 is subsidized because they make money with the games.

Printers are subsidized because they make money with the ink.

The API use is subsidized because they make money with Claude Code? I would understand if Claude Code could only be used with Anthropic's API, but not the other way around. 1 million tokens is 1 million tokens, unless Claude Code is burning tokens and others are more efficient in token use.


They want you to become dependent on Claude Code, so that later they can milk you.

I'd say that they want Claude Code to become the standard, so that they can milk corporations on enterprise plans. We individual subscribers are nothing, but we'll go to work and be vocal about specifically having Claude.

Because models are quickly moving toward commoditization, whether the big three like it or not. The differentiator now is tooling around those models. By eliminating OpenCode's auth stuff, they prevent leaking customers onto another platform that allows model choice (they will likely lose paying customers to one of the major inference catalogs like OpenRouter once they move from Claude Code to OpenCode).

Why does Netflix care how the movies they stream to you are consumed? Shouldn't your $8/mo allow you to stream any movie to OpenFlix and consume however you like?

You are also not allowed to show these Netflix movies on a big screen in front of your house and charge people. The 8 dollars are for a specific use case, just like the tokens in the subscription.

Unironically, you should. In a more just world, laws would mandate service providers not obstruct third party clients.

The pricing would also be different.

Yes, content providers would have to compete with each other on price and library, and client providers could compete on UX and privacy.

Because they're selling discounted tokens to use with their tooling.

If you use Claude through an interface that’s not Claude Code, you’ll only stick with it for as long as it proves itself the best. With other interfaces, you can experiment with multiple models and switch from one to another for different tasks or different periods of time.

Those tokens going to other providers are tokens not going to Anthropic, so they want to lock you in with Claude Code. And it clearly works, since a lot of people swear by it.


Because they're selling them at a 90% discount in the subscription. They're more than happy if you use the tokens at API pricing, but when subsidized they want you to use their Claude Code surface.

> Paranoid that anthropic is going to torpedo them via some backdoor

Like with lawyers or something?


Rather the hypothetical situation where Anthropic makes a code change to Bun to introduce a backdoor.

Anthropic leadership is delusional, not suicidal, so they would rather use their lawyers.


[flagged]


Sad day when the hacker forum starts lamenting the poor copyright holders.

Hacker news is about hackers in the same way that the peoples democratic republic of Korea is about democracy.

I feel HN did have a more information-wants-to-be-free-ey, disrupt-the-incumbents-ey era, though. Or was it all a dream?

On what basis are you assuming that Anthropic committed greater copyright theft than Meta, OpenAI, and Google (not to mention many lesser-known options)?

Legally speaking, they were found by a court to have done so, and the others weren't.

When did that happen? Did they admit guilt in the big settlement, or was there a different case?

OpenCode is a very meh agent.

Source: I run pretty much all of these agents (codex, cc, droid, opencode, amp, etc.) side-by-side in agentastic.dev, and opencode had basically a 0 win-rate over other agents.


I've been using opencode and would be curious to try something else. What would you recommend for self-hosted LLMs?

Very new to self-hosted LLMs, but I was able to run Codex with my local Ollama server (codex --oss).

Anthropic provides subsidized access to Claude models through Claude Code. It is well understood to be 'a loss leader' so that they can incentivize people to use Claude Code.

OpenCode lets people take the Claude-Code-only API key and use it in a different harness. Anthropic's preferred way for such interaction is getting a different key: a plain Claude API key (not a Claude Code SDK key).

---

A rough analogy might be a cafe offering subsidized drinks provided you sit there and eat their food. Then someone else says: go get the free drink from that cafe, but come sit in our cafe and order our food instead. It is a loose analogy, but you get the idea.


> It is well understood to be 'a loss leader'

You have zero proof for this claim. It's like people read that somewhere and keep repeating it again and again without understanding.


If it wasn't the case, the Claude API pricing would be the same, $200 for unlimited use. But it's metered.

We don't know if Claude Code bleeds money for every user that touches it. Probably not. But the different pricing is a strong enough clue that it's a customer-acquisition product with subsidized token consumption.


API is intended for a different audience - companies with a big pocket who aren't as price sensitive as private users. So the pricing will be different than for a private subscription.

That is not true at all. I, as an individual, can go and get access to Claude models via the API today for, I don't know, some custom workflow I have.

What Anthropic is saying is: please don't use the API key from Claude Code for that.


There is huge value in getting people to subscribe to recurring payments. Giving people a discount to do so makes sense and does not mean that the subscription service loses money.

> If it wasn't the case, the Claude API pricing would be the same, $200 for unlimited use.

How do you figure? That doesn't make any sense to me.


It's not a loss leader - as in they're not making a loss on the subscription.

Because they control the harness(es) and the backend, they can optimise caching and thus the costs to them.


I'm giving up. Caching is optimized server-side on a product for which they can't control the client.

Loss leader doesn't mean $0. Loss leader means it is subsidized to attain another, larger goal.

Thank you, I understand all of this. My question was about the reference to "petty and bitter."

It revolves around how OpenAI has much better models and how Claude Code engineers are a bunch of kids (which is kind of ironic).

What exactly are you referring to?

>usage subsidization

Is this actually the case though? Because I can't imagine what kind of hardware they're running to have costs per 1M tokens be above like $3.


This seems like pure misinformation. The code lines that are actually changed:

  hint: {
    opencode: "recommended",
-   anthropic: "API key",
    openai: "ChatGPT Plus/Pro or API key",
  }[x.id],
They're removing the ability to use OpenCode via Anthropic API key

This is what most people in the comments are missing. They are removing the ability to even use Anthropic APIs, not just your Max subscription.

This is not true. API keys are supported; only "Claude Code" auth is being dropped.

That code is just a CLI hint for which LLM they recommend using. So they stop recommending Anthropic, rightfully so.


Is this what the legal request demanded, or is this just something that OpenCode is doing out of spite? Seems unclear. To me the meat of this change is that they're removing support for `opencode-anthropic-auth` and the prompt text that allows OpenCode to mimic Claude Code behavior. They have been skirting the intent of the original C&D for a while now with these auth plugins and prompt text.

Using your API key in third-party harnesses has always been allowed. They just don't like using the subsidized subscription plan outside of first-party harnesses. So this seems to be out of spite

It is what the legal demands are. They requested removal of all Anthropic (trademark?) mentions.

Anthropic's issue was always them spoofing OpenCode as Claude Code, piggybacking on the subscription plan.

Banning them from using the pay-per-token API key would be bad business.


I believe parent is talking about a separate topic, not about this change.

LLM generated article.

I wonder if an LLM generated article would get the title to use proper English, though: "What if Python were natively distributable?".

It's possible LLMs pick up improper English, of course, since proper is some measure of what used to be a norm, but may presently be perceived as outdated.


Is it possible it's both?

Evidently what becomes standard gradually changes. I believe you can see this in the construction of the past tense (perfect tense) of verbs in Polish etc. vs. Russian, where Russian just uses the grammatical past participle as if it were the simple past tense.

Speakers of English in the Americas make this same substitution, which sounds like a mistake to those who speak the version of English taught in schools. They will say "I seen that" rather than "I saw that", for example, just as would happen in Russian.


I have a feeling people will begin to purposely use slightly incorrect grammar to give the impression they are indeed human in their writing.

definitely: look at groups choosing their own deviations to signal group membership. american slang groups for instance, including teen kids purposefully using jargon they redefine among themselves so parents are un-cool.

I mean, this completely falls apart when you're trying to do something "real". I am building a trading engine right now with Claude/Codex. I have not written a line of code myself. However I care deeply about making sure everything works well because it's my money on the line. I have to weigh carefully the prospect of landing a change that I don't fully understand.

Sometimes I can get away with 3K LoC PRs, sometimes I take a really long time on a +80 -25 change. You have to be intellectually honest with yourself about where to spend your time.


Wow, quite surprising results. I have been working on a personal project with the astral stack (uv, ruff, ty) that's using extremely strict lint/type checking settings, you could call it an experiment in setting up a python codebase to work well with AI. I was not aware that ty's gaps were significant. I just tried with zuban + pyright. Both catch a half dozen issues that ty is ignoring. Zuban has one FP and one FN, pyright is 100% correct.

Looks like I will be converting to pyright. No disrespect to the astral team, I think they have been pretty careful to note that ty is still in early days. I'm sure I will return to it at some point - uv and ruff are excellent.
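For anyone curious what "extremely strict" settings buy you, here's a hedged sketch (functions and names invented for illustration, not from the commenter's codebase) of the classic implicit-None issue a strict pyright run flags but a lenient setup lets through:

```python
from typing import Optional

def find_role(name: str) -> Optional[str]:
    # Hypothetical lookup; returns None for unknown users.
    users = {"alice": "admin"}
    return users.get(name)

def get_role(name: str) -> str:
    role = find_role(name)
    # Under strict settings, a bare role.upper() here is rejected at
    # check time because role may be None; the guard satisfies the checker.
    return role.upper() if role is not None else "unknown"
```

Checkers vary in whether they catch this by default, which is roughly the kind of gap the comparison above is probing.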


This is the way. For now, it's also 100% pyright for me. I can recommend turning on reportMatchNotExhaustive if you're into Python's match statements but would love the exhaustiveness check you get in Rust. Eric Traut has done a marvellous job working on pyright, what a legend!

But don't get me wrong, I made an entry in my calendar to remind me of checking out ty in half a year. I'm quite optimistic they will get there.


Say what you will about Microsoft, but their programming language people consistently seem to make very solid decisions.

Microsoft started as a programming language company (MS-BASIC) and they never stopped delivering serious quality software there. VB (classic), for all its flaws, was an amazing RAD dev product. .NET, especially since the move to open-source, is a great platform to work with. C# and TS are very well-designed languages.

Though they still haven't managed to produce a UI toolkit that is reliable, fast, and easy to use.


For big codebases pyright can be pretty slow and memory hungry. Even though ty is still a WIP, I'm adopting it at work because of how fast it is and some other goodies (e.g. https://docs.astral.sh/ty/features/type-system/#intersection...)

I assume this is pretty rare, but ty sometimes finds real issues that are actually allowed by the spec, like:

  def foo(a: float) -> str:
    return a.hex()

  foo(False)
is correct according to PEP 484 (when a parameter is annotated as having type float, an argument of type int is acceptable), but this will lead to a runtime error. mypy sees no type error here, but ty does.
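A runnable sketch of that failure mode (the wrapper name is invented): the int-to-float promotion type-checks under PEP 484, but int (and bool, its subclass) has no .hex() method, so the call blows up at runtime.

```python
def foo(a: float) -> str:
    return a.hex()

def call_foo(x) -> str:
    # PEP 484 promotion lets an int/bool flow into the float parameter,
    # but only float actually has a .hex() method.
    try:
        return foo(x)
    except AttributeError:
        return "runtime error"
```

call_foo(1.5) returns the hex representation, while call_foo(False) hits the AttributeError branch.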

You probably just don't have the hang of it yet. It's very good but it's not a mind reader and if you have something specific you want, it's best to just articulate that exactly as best you can ("I want a test harness for <specific_tool>, which you can find <here>"). You need to explain that you want tests that assert on observable outcomes and state, not internal structure, use real objects not mocks, property based testing for invariants, etc. It's a feedback loop between yourself and the agent that you must develop a bit before you start seeing "magic" results. A typical session for me looks like:

- I ask for something highly general and claude explores a bit and responds.

- We go back and forth a bit on precisely what I'm asking for. Maybe I correct it a few times and maybe it has a few ideas I didn't know about/think of.

- It writes some kind of plan to a markdown file. In a fresh session I tell a new instance to execute the plan.

- After it's done, I skim the broad strokes of the code and point out any code/architectural smells.

- I ask it to review its own work and then critique that review, etc. We write tests.

Perhaps that sounds like a lot but typically this process takes around 30-45 minutes of intermittent focus and the result will be several thousand lines of pretty good, working code.


I absolutely have the hang of Claude and I still find that it can make those ridiculous mistakes, like replicating logic into a test rather than testing a function directly, talking to a local pg instance that was stale or still running, etc. I have a ton of skills and pre-written prompts for testing practices, but over longer contexts it will forget and do these things, or get confused, etc.

You can minimize these problems with TLC but ultimately it just will keep fucking up.


Don't know what to tell you. Sounds like you're holding it wrong. Based on the current state of things I would try to get better at holding it the right way.

I can't tell if you're joking?

My favorite is when you need to rebuild/restart outside of claude and it will "fix the bug" and argue with you about whether or not you actually rebuilt and restarted whatever it is you're working on. It would rather call you a liar than realize it didn't do anything.

This is a pretty annoying problem. I just solve it by asking Claude to always run the right build command after each batch of modifications, etc.

"That's an old run, rebuild and the new version will work" lol

With the back and forth refining I find it very useful to tell Claude to 'ask questions when uncertain' and/or to 'suggest a few options on how to solve this and let me choose / discuss'

This has made my planning / research phase so much better.


Yes pretty much my workflow. I also keep all my task.md files around as part of the repo, and they get filled up with work details as the agent closes the gates. At the end of each one I update the project memory file, this ensures I can always resume any task in a few tokens (memory file + task file == full info to work on it).

Pretty good workflow. But you need to change the order of the tests and have it write the tests first. (TDD)

I mean I’ve been using AI close to 4 years now and I’ve been using agents off and on for over a year now. What you’re describing is exactly what I’m doing.

I’m not seeing anyone at work either out of hundreds of devs who is regularly cranking out several thousand lines of pretty good working code in 30-45 minutes.

What’s an example of something you built today like this?


Fair, that's optimistic, and it depends what you're doing. Looking at a personal project I had a PR from this week at +3000 -500 that I feel quite good about, took about 2 nights of about an hour each session to shape it into what I needed (a control plane for a polymarket trading engine). Though if I'm being fair, this was an outlier, only possible because I very carefully built the core of the engine to support this in advance - most of the 3K LoC was "boilerplate" in the sense I'm just manipulating existing data structures and not building entirely new abstractions. There are definitely some very hard-fought +175 -25 changes in this repo as well.

Definitely for my day job it's more like a few hundred LoC per task, and they take longer. That said, at work there are structural factors preventing larger changes, code review, needing to get design/product/coworker input for sweeping additions, etc. I fully believe it would be possible to go faster and maintain quality.


Those numbers are much more believable, but now we’re well into maybe a 2-3x speed up. I can easily write 500 LOC in an hour if I know exactly what I’m building (ignoring that LOC is a terrible metric).

But now I have to spend more time understanding what it wrote, so best case scenario we’re talking maybe a 50% speed up to a part of my job that I spent maybe 10-20% on.

Making very big assumptions that this doesn’t add long term maintenance burdens or result in a reduction of skills that makes me worse at reviewing the output, it’s cool technology.

On par with switching to a memory managed language or maybe going from J2EE to Ruby on Rails.


Thinking in terms of a "speed up multiplier" undersells it completely. The speed up on a task I would never even have attempted is infinite. For my +3000 PR recently on my polymarket engine control plane, I had no idea how these types of things are typically done. It would have taken me many hours to think through an implementation and hours of research online to assemble an understanding of typical best practices. Now with AI I can dispatch many parallel agents to examine virtually all public resources for this at once.

Basically if it's been done before in a public facing way, you get a passable version of that functionality "for free". That's a huge deal.


1. You think you have something following typical best practices. You have no way to verify that without taking the time to understand the problem and solution yourself.

2. If you’d done 1, you’d have the knowledge yourself next time the problem came up and could either write it yourself or skip the verifications step.

I’m not saying there aren’t problems out there where the problem is hard to solve but easy to verify. And for those use cases LLMs are terrific.

But many problems have the inverse property. And many problems that look like the first type are actually the second.

LLMs are also shockingly good at generating solutions that look plausible, independent of correctness or suitability, so it’s almost always harder to do the verification step than it seems.


The control plane is already operational and does what I need. Copying public designs solved a few problems I didn't even know I had (awkward command and control UX) and seems strictly superior to what I had before. I could have taken a lot longer on this - probably at least a week, to "deeply understand the problem and solution". But it's unclear what exactly that would have bought me. If I run into further issues I will just solve them at that time.

So what is the issue exactly? This pattern just seems like a looser form of using a library versus building from scratch.


For one I’d argue that you shouldn’t just use a library without understanding what it does and verifying it does what it says.

But a library has been used by multiple people who have verified that it does what it says it does as long as you pick something popular.

You have no idea what this code does. Maybe it has a huge security flaw? Or maybe it’s just riddled with bugs that you don’t know enough to expose.

Maybe it “follows best practices” that your agents uncovered or maybe it doesn’t.

If you expose customer data, or you fuck up in a way that costs customers money, the AI isn’t liable for that you are.

Now if this is just a toy app where no one can be harmed sure who cares.


Hard to read due to LLM generated prose.

Yeah, it's quite bad. Just some of the classics:

- "Why This Matters"

- "That's accurate, but it's only half the answer — and the less interesting half"

- "this isn't an edge case. It's routine."

I'm at the point where I would just rather read something somebody actually wrote, even if it's not grammatically perfect and has lots of spelling mistakes.


Unfortunately the expectation of readers, and algorithms, at large is perfection.

If this contained various grammer mystaeks, but interesting content, it wouldn't have been flagged. As usual with LLM, it is based on other content. Show me the source, we used to say to binaries... ¿Que pasa?

So the upvotes were for? Anyway, we disagree — that's normal.

> As usual with LLM, it is based on other content.

Show me where else on the internet someone waxed poetic about a conceptual separation of transport and function regarding WireGuard. I dare you.

Show me another client library like the one in the article? That’s the double-dare.

Did you even read it?


Since you didn't think it was worth writing it yourself, I don't see how you can expect others to think it's worth spending their time to read.

So no, then? Thanks for your thoughtful engagement.

> Did you even read it?

Did you? That is the issue we have. We can't know for sure that you even read your own article, since it has all the hallmarks of LLM generated content. It's embarrassing.


> So the upvotes were for?

People getting tricked? Who knows?

> Did you even read it?

I quit when I figured it was written by an LLM. I'm not interested in reading LLM 'content' without it providing a source.

I am willing to generate some of my own sauce with a prompt, and then requesting the sources. That way, I know at least some parameters of the input and output.

But with your article, I do not know which sources were used as reference, I do not know which prompt you used.

As for HN, they're busy with tackling the LLM problem. They know it is a problem.


Again, this was novel content. If you find a source of anything similar let me know. I'm belaboring this point for one important reason: content matters. I want to see new thoughts, not repetitive mindless drivel in personal "voice".

There has to be a balance.


One thing I've seen before is people being upfront about using LLMs (at the top of the content). That way, those who dislike it will feel less tricked.

The balance at least on this site is strongly in favour of humans writing things.

You’re belabouring the point because you don’t believe that by filling the internet with slop you’re doing anything wrong when actually it’s antisocial and wrecks the commons.

If you think content matters so much then just invest the time in writing it yourself rather than trying to convince others that it is ok that you didn’t.


The pot calling the kettle black, methinks. How are you improving the internet by vilifying new ideas?

No. It’s authenticity instead of llm-generated blogvertising.

When I ask an LLM, one that’s vaunted here for its skill at code, to “clean up obvious errors and improve readability”, how is that “LLM generated”?

Yes it’s advertising in that I believe in my product and write about it.


Dude. Give it a rest. You had the LLM write an article, you posted it here. You got called out.

Just write your own blog and this won't happen in future.


Sigh. I did write it, then I used an LLM to clean it up. Seriously, if you can find anything else out there making a similar point or providing a similar library I'd love to hear about it.

It did more than clean it up. It stained it completely.

You're absolutely right!

This is and has always been trivially configurable. Just put `Task` as a disallowed tool.


Part of the issue with legal weed is that it's much like if all alcohol were sold as minorly different varieties of Everclear at 150+ proof, with brands' primary boast being just how potent and alcoholic their mix is. It doesn't encourage appropriate usage, and IIRC many of these cases of psychosis come from consuming high-THC products 24/7 for weeks/months/years on end.

If anyone is curious, check out brands like Rove, Dompen, Care By Design, which offer THC pens at very low dosage. They're frustratingly undermarketed and understocked, but as a CA resident I buy and use pens that are ~4% THC (rather than 90%+). A single puff occasionally after the kids go to sleep - the effect is marginally psychoactive, scratches the itch for "relaxation without impairment", helps me sleep restfully.

Completely different experience to high-THC products. If you compare the literal amount of THC consumed, it's an almost 20x reduction. It's literally the equivalent of having a half glass of wine instead of lining up 10 shots.


I use gummies, ~4-5mg THC (ideally with some of the other TH- chemicals in it), deliberately kept my tolerance low so it doesn’t get more expensive (and I almost only use it for sleep, purely “fun” use is maybe a couple days a year). Take in the evening, start an MST3K episode about an hour later, really enjoy the back half of it, go to bed and fall asleep instantly, wake up feeling like a million bucks. Perfect evening.


I see a lot of people using weed for better sleep, but isn't weed supposed to interfere with REM states? I thought that weed would have the opposite effect that you say. Do you dream if you use weed before bed?


I rarely dream either way (unless I start focusing on that specifically, then my recall will improve quickly). When I was younger and would go to bed severely stoned I would wake up groggy and lethargic - clearly not optimal sleep. On 3-4% THC I usually wake up spontaneously and feel well rested. It mostly just helps me fall asleep and stay asleep. YMMV obviously.


It’s a pretty low dose; it doesn’t exactly send me into space (heavy users might need 10x or more that dose to even feel it), just enough to make my brain shut up so I can fall asleep. I think a lot of folks who have a bad time when they try it start at far too high a dose (I wouldn’t even start at 5mg; maybe shoot for like 2). I also don’t much enjoy being properly high; anything past what you’d call a heavyish buzz I find unpleasant. My standard nighttime dose doesn’t even quite get me to the heavier end of a buzz — that’s more the 7-10mg range for me, though I’d caution that some gummies seem more potent, and some nominal-5s do get me closer to that than others.

I dunno about sleep quality effects, but it’s definitely better than even a couple beers (for me, these days) and it’s way better than lying awake until 3am… for the third night in a row. For most of the night it should be mostly worn-off, again, I’m not taking a ton and it takes longer to work through you in edible form than smoking, but we’re still talking less than half the night, especially as I usually time it so it hits just a little while before bed (I don’t want to get in bed without it having hit yet).

I don’t remember having had dreams most nights anyway, so I don’t know about that. Even with some help I’m typically a bit under the low side of the amount of sleep I ought to be getting, over a week. Lucky if I break the eight-hour mark two days of the seven, usually in the 6.5-7.5 range the rest (I don’t take a gummy every single night, either, gotta keep that tolerance at bay). I think I dream (or, at least, remember it) more when I get the rare series of several days of 8+ hours, but I don’t track it so can’t say for sure, and yeah, no idea the effect of weed on that.

I can vouch that at my dose level I get way better sleep than I did the one time I tried a prescription sleep aid, which was Lunesta. If I didn’t get a solid 9 hours on that I’d wake up feeling hung-over, weed doesn’t give me extra trouble like that if I fail to get a full 8+ hours. Hell, even a “good” night on lunesta didn’t leave me feeling awesome in the morning. Other downsides: it mixes worse with other things, had a glass of wine with dinner? Better think twice about the lunesta, at least according to the label. On some decongestant medicine (in addition to antibiotics) for a sinus infection, and the sinus infection is wrecking your ability to sleep so you could really use it? Might not be able to take it with the other stuff. Weed’s so much better for those cases especially, bump the dose slightly and nothing short of something that’s gonna hospitalize me will be able to keep me from sleeping, and it famously doesn’t interact badly with very many other drugs, so it removes the very worst thing about most common illnesses like that (for me, anyway) which is the extreme sleep disruption.


You get what you pay for imo.

