pleonasticity's comments | Hacker News

“Israel says…”


I started doing this too after reading The Artist's Way by Julia Cameron. She prescribes these “morning pages” as one of the principal tools for overcoming internal creativity blocks. I also really enjoy drinking my coffee and writing these pages with my Pilot fountain pens.


I think the strongest hypothesis is that he combined the anti-war movement and the Black freedom struggle. He was assassinated one year after his historic “Beyond Vietnam” speech. https://www.americanrhetoric.com/speeches/mlkatimetobreaksil...



Looks like he rediscovered Pulsed Laser Deposition: https://en.wikipedia.org/wiki/Pulsed_laser_deposition


Not quite. Their process runs in ambient conditions (not the vacuum of PLD) and produces molten droplets rather than a plasma. But for low-precision work (I deal in fractions of a micron at work), this sounds like a neat, cheap DIY option if you already happen to have a pulsed-laser CNC setup.


I’m glad they actually tried synthesizing some of the materials their model predicted. It looks like they succeeded with only 1 of the 4 materials they attempted. The ~20% property-accuracy claim appears to be for bulk modulus. I still see little value in this technology for designing electronic properties, mainly because density functional theory, which provides the training data, is not reliable there. Their code does look clean and well organized, though; perhaps I’ll give it a try.

My biggest problem with this application of AI is that it tries to approximate DFT, which is itself an unreliable approximation. The claim is that a surrogate lets you amortize expensive DFT across a search of the space, but it’s also true that, especially for inorganic materials, training sets do not appear to promote strong generalization. So you embark on an expensive task only to wind up back at unreliable DFT. I think the better goal would be to make DFT itself better, and I have seen impressive, albeit computationally expensive, approaches to that, e.g. FermiNet from DeepMind.
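The amortization loop being criticized can be sketched abstractly: pay for DFT on a small set of structures, fit a cheap surrogate to those labels, screen a huge candidate pool with the surrogate, and send only the top hits back to DFT. Everything below is toy data and a stand-in linear model, not a real materials featurization:

```python
# Sketch of the "amortize DFT" workflow: fit a cheap surrogate on a small
# set of expensive DFT labels, screen a large candidate pool with it, then
# verify only the top-ranked candidates with DFT. The features and labels
# here are synthetic stand-ins, not a real materials representation.
import numpy as np

rng = np.random.default_rng(0)
n_features = 8

# Pretend we paid for DFT on 50 structures (labels = formation energy).
X_train = rng.normal(size=(50, n_features))
true_w = rng.normal(size=n_features)
y_train = X_train @ true_w + 0.05 * rng.normal(size=50)  # "DFT" labels

# Cheap surrogate: ridge regression via the normal equations.
lam = 1e-3
w = np.linalg.solve(
    X_train.T @ X_train + lam * np.eye(n_features), X_train.T @ y_train
)

# Screen 10,000 candidates at negligible cost...
candidates = rng.normal(size=(10_000, n_features))
scores = candidates @ w  # predicted formation energies

# ...and send only the 10 most stable-looking (lowest energy) back to DFT.
top10 = np.argsort(scores)[:10]
print(f"verify with DFT: {len(top10)} of {len(candidates)} candidates")
```

The criticism in the comment is that the surrogate can never be better than the DFT labels it was fit to, no matter how big the candidate pool gets.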


I agree that DFT is an approximate solution to the Schrödinger equation, but what would you like to see them do? Quantum Monte Carlo or configuration interaction? Those methods do not scale well, especially when heavy elements are involved. DFT is the current sweet spot between accuracy and computational cost in this field. Making DFT better has been an ongoing effort for 30-40 years at least; it is not an easy task. For many real-world materials, DFT is the best we can do.


I like this paper, and it appears to be one of the best in the literature so far for AI in materials. Even DFT is not really scalable for this: computing the ground state of even a dozen unit cells requires many CPU-hours. They themselves in fact relax the proposed structures by minimizing the energy of pseudopotentials, because even DFT is too expensive for that step. As I said, I think improving DFT itself is the most potentially impactful application of AI in this space. Of course approximations are always necessary, and I’m not at all against that, but DFT ignores or approximates correlations by design, so there is an inherent limitation there, which means that if you train your models to predict its output, they inherit the same limitation. It’s like training LLMs principally on synthetic data. LLMs have certainly succeeded with limited amounts of synthetic data, but they are principally trained on “real” data.


I agree, but improving DFT is a much harder problem than running AI on crystal+property databases. The interesting question is why it is harder.


I share your viewpoint on this, that DFT is a poor proxy model for ML to approximate.

However, the problem with the alternative (using, for example, experimental data) is that differences in synthesis procedures, measurement parameters, sample impurities, and even experimental apparatus make training datasets of even modest size insanely heterogeneous. So models are either trained to predict differences between materials that are really experimental discrepancies, trained on very small datasets, or given a slew of post-hoc physics-based adjustments to get reasonable numbers.

Higher-order computational methods (including simply more intensive, non-high-throughput DFT) are accurate but expensive, as you know. Some of them have systematic error the way DFT does, and are essentially based on user choice of (many!) parameters; charged-defect calculations are one example. Finding large (>10^4) training sets computed with similar parameters is difficult. “ML” for these kinds of calculations usually consists of calculating a hundred (or ten) crystals within a narrow chemical system, doing a linear regression on one variable (e.g., valence of the cation on some site), and getting numbers within ±10% of a “true” number.

GGA/meta-GGA DFT, on the other hand, can be applied at sufficient fidelity to get real(ish) numbers in a homogeneous way across huge numbers of crystals. So you are correct: in many cases you are predicting an approximate number for a property. But if we know the approximate number is wrong due to systematic error (and in some situations we can), we can apply corrections or higher-order methods to get the right(ish) answer. Moreover, it’s highly dependent on which property you’re interested in. Some properties, like band gap, can be off by a lot. Others, like formation energy, can be calculated pretty accurately even with run-of-the-mill GGA DFT. Elastic moduli are generally OK.

In summary, approximating DFT with ML is just the least messy way to get real-ish answers across a large number of materials. Of course, there’s a point at which low-fidelity DFT calculations are (1) so cheap and (2) generally so inaccurate that having an ML model approximate them is pointless. Most large materials DBs now use good-enough DFT that the numbers they produce are not pointless for ML to learn from.

In the future, I think models trained on large numbers of DFT calculations will have to be adapted to narrow sets of higher-fidelity calculations via fine-tuning, much like you can fine-tune a generalized LLM to do specific things. That might be where ML can bring real value to materials design.

Also, it’s worth considering that synthesizing novel materials can be insanely difficult. So 1 in 4 is not bad in my opinion.


These cyclists unfortunately just need to point their headlights downward; the installation instructions for almost all bike headlights direct you to aim the light below horizontal. That is also the only difference between automotive high and low beams: the angle at which they are aimed.


You would think that, but most bike headlights just throw as much light as possible in a 180-degree arc so the rider can spot drop bears and other related hazards. It is rare to find a light that directs the photons downward like an automotive headlight. When I was shopping for a headlight, my local bike shop didn't sell a single model with that feature; in fact, they made a big deal of 180 degrees of illumination, 1000-lumen output, and a strobe mode that blinds oncoming traffic. The shop's stock was also hilariously overpriced for what it was: $5 of flashlight components in a plastic case for $80. It felt like the entire market was being grossly overcharged and underserved.

I had to construct my own headlight out of a flashlight and a homemade deflector.

For example:

https://www.planetbike.com/products/beamer-700-bike-headligh...


The shape of the beam is important, not just the angle of it. Part of what differentiates a low-beam headlamp from a high-beam headlamp is the shape of the beam.

Bike lights, at least as-sold here in the States, seem to generally be built from flashlight parts. And unlike car low beams, flashlights project a circular beam.

With a circular beam, it is really hard to illuminate the path ahead with any meaningful brightness without blinding others.

A better beam pattern, ideally with sharp cutoffs, can illuminate a pathway and the obstacles that may be on it without unduly blinding others.

---

I happened to buy one such light, just by chance, several years ago. It's a "Schwinn Intensa 100" from Wal-Mart, part number SW80251WM. It kind of sucks in terms of overall illumination and is no good for high-speed rides at night, but it does light up the path ahead and provides a sharp beam cutoff to avoid blinding others. (So actually, it's pretty excellent for casual riding.)


> These cyclists unfortunately need to just point their headlight downward […]

On well-lit urban streets, you probably don't need more than ~500 lumens, as the road is already illuminated and the light is mostly about other people seeing you.

Also, having it flash in a consistent manner (and not some kind of "random" cycle) is best, as it's easier to track a simple on-off pattern with a deterministic frequency.

(If you're riding on non-lit roads or trails, then certainly more lumens and further throw is useful.)


This is great work, but HumanEval is an extremely limited benchmark and I don’t think you can seriously claim to beat GPT-4 at coding based only on that metric.


Fifth sentence:

> However, we’ve found that HumanEval is a poor indicator of real-world helpfulness.


Thank you. You're right -- which is why we rely on feedback we've received from our own users for that claim. Many of our users who have the choice to use either GPT-4 or the Phind Model on Phind choose the Phind Model.


You likely know this, but keep in mind the selection bias in taking feedback mostly from your own users. I've lost count of the times I've heard product designers claim that their users prefer some aspect of how their application already works, ignoring the fact that the users who didn't prefer it have left and hence are likely not around to survey.


Of course. We do our best to talk to churned users as well, but we're doing this Show HN to get even more diverse feedback.


I understand, but big claims require big evidence, so it’s still, IMHO, not a rhetorically strong position. I’m glad people find it more useful!


SK is basically LangChain in C#/.NET.


Hi, article author here,

Semantic Kernel and LangChain are both geared towards NLP but they have different takes on it. While LangChain revolves around creating sequences of calls known as "Chains", SK employs a "Kernel" to manage these sequences and has a "Planner" to auto-create chains for new user needs.

SK steps up the game with plugins supporting both semantic and native functions, which isn't a feature in LangChain. Also, SK has a memory feature to store context and embeddings, broadening its use case.

Moreover, SK is more welcoming to C# integration alongside Python, and has a knack for blending AI services like OpenAI with conventional coding, which LangChain doesn't offer.

So, in a nutshell, while there are similarities, SK packs more features and a bit of a different approach compared to LangChain.
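As a rough conceptual sketch of the Kernel/Planner idea described above (this is not Semantic Kernel's actual API; the names and functions are invented for illustration): a registry of small functions, plus a planner that composes them into a chain for a stated goal.

```python
# Toy illustration of the "planner composes a chain of functions" concept.
# A real planner would ask an LLM which registered functions to chain for a
# user's goal; here the "plan" is simply looked up by capability name.
registry = {
    "summarize": lambda text: text.split(".")[0],  # keep first sentence
    "uppercase": str.upper,
    "exclaim": lambda text: text + "!",
}

def plan(goal):
    # Resolve a goal (a sequence of capability names) into callables.
    return [registry[step] for step in goal]

def run_chain(chain, data):
    # Pipe the data through each function in order.
    for fn in chain:
        data = fn(data)
    return data

chain = plan(["summarize", "uppercase", "exclaim"])
print(run_chain(chain, "Kernels manage chains. Planners build them."))
# KERNELS MANAGE CHAINS!
```

The point of the sketch is only the shape of the abstraction: the kernel owns the registry and execution, while the planner decides which functions to compose.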

Hope this clears things up!


Not sure I agree here. LangChain's LLMChains, while initially popular, are not what people have been using. With function calling and agents+tools, both LangChain and ChatGPT do a lot of the same things Semantic Kernel does.

You've mentioned a few things that LangChain doesn't offer, and I'm not sure how true that is. LangChain has a TypeScript offering, which is easy to interweave with traditional coding, and if you're even the tiniest bit serious about your ML system, you'll likely use Python anyway.

LangChain is a bunch of thin abstractions that can quickly scale with the pace of growth in the LLM world. All the features you mention are now mature features within LangChain too.

I'm genuinely interested in Semantic Kernel, and LangChain has obvious pitfalls. But I have yet to find another solution that moves as quickly, integrates as broadly, has as large a community, and still ends up being an acceptable product from a quality standpoint.


I work on Copilot now, and I can say this guy is totally full of himself. Just because he contributed to the first prototype does not entitle him to claim creation. He wasn’t even the first person inside Microsoft to use LLMs in an IDE completion experience. The work from prototype to product is an order of magnitude more than prototype alone.


Orilly? We had no PM, EM, or designer: I played all those roles for 1.5 years.

I was one of the first to touch the OAI code model. Albert and I developed the in-the-wild testing harness still in use today. We pulled all-nighters to get GitHub approved for participation in the MSFT-OpenAI deal using those test results.

Existing Microsoft AI teams worked to halt our work, pushing their own small, worse models instead of OpenAI's.

I prototyped and lobbied for the creation of the VScode extension. I invented and hacked the ghost-text prototype into VScode, invented the block-based termination, and implemented all the tree-sitter-based logic needed for it. Then I had to lobby up to Satya to get VScode to implement proper support in less than 6 months.

I named it Copilot.

I implemented GH auth and built the waitlist and onboarding. I helped design the e2e HTTP/2 Go server after designing the fast.ly-based precursor, coordinated the move from OpenAI's datacenter to Azure to improve the Asia experience, and oversaw the cutover.

I was Chief Architect. It was my baby. Sad if this is how they are spinning the story internally at GitHub today.


Seems like people are eager to attack you because you've shown a bit of arrogance, but if what you wrote is true, then it's completely fair to feel frustrated about it. I sympathize with you. I used to feel this way about much, much smaller achievements of mine.

Even though I don't use it personally, Copilot is a great tool, and the bonus is laughable next to the impact it had on how people write code. For me the lesson learned is that the way to spend your career in a good place, and to save yourself months or years, is to voice your concerns early and move on if all you hear are promises without any follow-up.


If your title was Chief Architect then accomplishments like this are baseline expectations for an IC, and your comp would have reflected that. It would have been well into 7 figure territory.


I don't think "chief architect" is a title at Microsoft, and it's probably worth noting that unless you're at the partner level, your pay at Microsoft will be pretty terrible compared to the rest of the tech industry.

That said, Alex's title appears to be principal engineer at GitHub, which is roughly equivalent to principal at Amazon in both expectations and comp. I worked at AWS for just shy of six years, and I can tell you that the person who created Firecracker -- which underpins huge portions of AWS's technology -- was at the same level, and while they were promoted shortly thereafter, they didn't receive a bonus (because comp packages at that level don't include bonuses at all). So, yeah, Alex is justified in sharing these details publicly, but this is just how it works in tech. (And, frankly, all the underpinning infrastructure that supported him wasn't his work -- he might have come up with Copilot, but would he have been able to without the work of thousands of others?)


This is the correct take. Copilot would have been impossible without the years of work from the brilliant people at OpenAI, VScode, Azure, and GH.


Exactly. He did his job, and was paid handsomely for it (and whether or not it became anything).

He leveraged massive resources to do it and was allowed to tap people with multiple decades of experience (even implicitly: do you think he built the execution engine where his "test harness" ran?). Not to mention that he essentially wrote a wrapper around an API.

If he posted his total comp, this pity party would shrink to a party of one.


That’s the reality of being employed, really.

No matter the accomplishment, I doubt they would have been able to create copilot outside Microsoft and thus without Microsoft’s resources (computing infrastructure, engineering talent and software ecosystem).

And “join a startup” is a dumb piece of advice, as you’d still be making someone else a profit.

Create your own company, and face the likelihood of bankruptcy.


Driving a brand new product from the ground up is not in the job description for almost all levels. Only maybe if you're one step below Distinguished Engineer.


It is (or was) explicitly in the role guideline at MSFT for principal level. There's lots of variation at that level. Some sit on their butt and only chime in when they have expertise, others prototype new ideas of their own. That's how a few things I know came to be.


No offense but this comment sounds really egotistical and shows a total lack of self-awareness. Software is all about teams and if you have one guy who thinks he's god's gift it ultimately hurts the whole team. I'm not surprised your compensation is much less than you think it should be, I suspect Microsoft has a more accurate view of your value than you do yourself.


> Software is all about teams

I mean sure, technically this is correct. But it also fundamentally takes away from independent contributors and visionaries. Sure, "software is all about teams," but Linus Torvalds is definitely the guy behind Linux, and Palmer Luckey is the guy behind Oculus. It's unfair to take their achievements away just because millions of people now contribute to Linux and Oculus was acquired by Meta.

I don't know OP, but I've seen this play out dozens of times in the cutthroat of corporate day-to-day, so it wouldn't surprise me for Microsoft to have a revisionist interpretation of how Copilot got started.


My own threshold for "I created X" would be "I was solo dev, until it got legs of its own and started needing serious outside help." I don't know the story behind Oculus, but objectively Linux and a host of other famous OS projects do have origin stories like this. Yet for this project -- even by his own telling, it wasn't just him, but "me and Albert". Okay. Not to mention it was all done in a bigcorp environment -- which by definition means tons of support from all kind of people (even while others may be simultaneously blocking or competing against you).

That's why the "I created" self-description rings hollow in my book -- as it does in essentially every other corporate / large-org case I've heard of (whether done as self-promotion, or the puff stories we're told about people we're supposed to hire or have run our teams or otherwise).

And which is why even people like Steve Jobs, who of course "was" the products he was famously associated with -- never went around saying "I created this." Not because he lacked for ego, or because his contributions were not monumental. But because he was smart enough to understand that once things get to a certain scale -- this kind of attribution is both absurd, and entirely beside the point.


> Software is all about teams and if you have one guy who thinks he's god's gift it ultimately hurts the whole team.

I don't see that reading in the comment above. He is just sharing the passion and sense of ownership he put into creating a great product. BTW, he mentioned in the thread that it was a team effort, so I don't see any issue here.


> Software is all about teams and if you have one guy who thinks he's god's gift it ultimately hurts the whole team.

John Carmack would agree with you.

And it's funny because while Carmack was that single person who was pretty much the best dev in the whole company, Romero was a better fit for your description.


Carmack obviously is (and was) a genius developer but I don’t recall him ever saying “Doom was my creation” or “Quake was all me.” Even though he clearly had a huge impact on the success of those games. He’s always been (in my eyes) a pretty humble and easy to work with person (this is seconded by some people I know who worked with him at Meta) which has contributed to his success.


Probably because we all know that already? People literally worship Carmack. There's a book titled "Masters of Doom".

I know I do :D


This total lack of empathy is why our industry is a circle wank of dudes. Why do you feel so obliged to defend the most evil company in history? And why do you feel the need to make personal attacks at some dude, is it because you think you are so much better yourself?


It's not about defending Microsoft -- it's about defending and acknowledging the dozens if not hundreds of other people who worked on, and continue to work on, Copilot and making it a commercial success.


A better approach to doing that (for the corporate employer) might be to say 'Here's a million $ bonus for the team, please divide up amongst yourselves as you see fit.'


that's probably what happened...


> the most evil company in history?

Microsoft may be bad but it's not the most evil company in history, not even by a long shot.


I'm not disagreeing with you but I'm curious as to who's on your short list of most evil company's in history?


The short version: all of the "X East India Companies" formed in Europe, which effected much of the West's colonization of Asia. The list of atrocities is immense.

Exxon and Texaco (now Chevron) would also be high on the list.

I want to be clear: I am absolutely not giving Microsoft a pass here. They've done quite a lot of bad. But they're a relatively new kid on the block vis-a-vis other companies out there.

PS. Within tech, I would also say that Oracle might top the list of 'evil companies' - but in their case, they really seem to actively embrace it (not that that's a good thing).


Completely forgot about the "X East India Companies"!


The overwhelmingly most valuable (and difficult) part of copilot is the work done by openAI. You're acting like you single handedly built it.

I can't stand working with people like you, classic main character syndrome.


Interesting that you switched to “we” and “our” pretty quickly in this comment when describing the work…


Your response basically confirms the parent commenter's assessment that you're completely full of yourself and sound awful to work with.


Btw, is the message behind this that people should work at a startup (maybe yours)? How do you plan to compensate individuals who do this type of work at your own startup?


Thank you for creating copilot. It's a great achievement. Sad that you did not get the recognition you deserve and grifters stole it.


In your mind, what would be fair compensation?


It depends on how revolutionary the product or service was. I'm not sure any company still does this, but the way Google rewarded and recognized the initial teams behind things like Gmail and Google Maps seems fairer. Based on the comments I’ve seen so far, I doubt that OP received much of either from GitHub/MS.

Edit: I’m not saying that MS/GitHub is obligated to do anything more than it already has. From a PR perspective, it’s just far cheaper to give a fat bonus and a promotion to early contributors on a big product and publicize it. It would also encourage other employees to push instead of just cruising. Otherwise, you end up with someone making a logical case for not working very hard for either GitHub or MS.


Not the OP, but looking at MSFT on levels.fyi, the last level of Principal SDE (67) would be the minimum I'd grade Copilot at; 68 or 69 would be more appropriate.

I am specifically referring to the features he lists in his comment above. E.g. ghost text, block completion, and OpenAI integration.

To add to his credit, Copilot in VS proper still sucks, so I'd estimate the original VS Code hacker to have better skills than the whole (i.e., the sum of the) VS proper Copilot extension team.


Their profile does say "creator of GitHub Copilot, Dropbox Paper, MobileCoin, and Hackpad" which indicates a bit of an ego problem. Professional software is a team effort, and pretending to be the only person who worked on a project is a red flag. Most people would say lead developer rather than creator...


Well, most interviews I’ve been a part of emphasize “I want to hear what YOU did, don’t talk about the team.” That optimizes for embellishing one's contributions, and it has almost become expected.


It's also part of the dog-and-pony show of self-promotion most companies make you do in order to justify promotion. Being Engineer 1 on a team of 6 that made GreatProductX is never enough. You always need to show what you single-handedly did. So all of a sudden, six different people claim to have "created GreatProductX".


No, at least seven people make the claim. You forgot the managers claiming credit for their employees' efforts.


It just couldn't have happened without the delivery managers, project managers, product managers, scrum masters, project controllers, business analysts...

What a funny world we live in! :)

Getting back to reality, ideas and prototypes are easy. Execution and delivery is hard.


This. What we see is nothing more than what the job market actually rewards and incentivizes. This person is just playing that game to boost his personal brand. You may not like it, but if you don't play this game in the current job market, you will have quite a hard time.


That's not what that line of questioning means; they are trying to understand what your actual contributions were, and how well you can explain and contextualize them. Embellishment is why we have such a heavy emphasis on coding interviews these days, sadly.


The trick to effective min-maxing in social contexts is to make sure people can't tell you're min-maxing.


What's min-maxing?


The concept comes from RPGs, or any sufficiently complex optimization problem where you have a limited number of total points to spend, so you "min"-imize the least helpful stats and "max"-imize the most helpful. For a fighter character, you'd obviously max out strength, as well as constitution and dexterity. You'd minimize intelligence, charisma, and wisdom, not because they’re not helpful but because you only have a limited number of points to spend. In the context of:

> The trick to effective min-maxing in social contexts is to make sure people can't tell you're min-maxing.

It means that, while in interview situations there is an expectation that you say "I did this," in social situations you may get more benefit from seeming humble and appearing to withhold your accomplishments, perhaps in a way where someone else fills the gaps for you, or where it entices the other parties into doing a bit of digging on their own and finding some well-placed bios online that don't look like they were written by you (or at your behest) and which do your bragging for you.

This avoids the situation where someone will say: "I work on Copilot now, and I can say this guy is totally full of himself." as other people respond "huh. yeah. that makes sense."
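The point-budget idea in the explanation above can be put in a few lines. The stats, weights, budget, and cap are all made up for illustration:

```python
# Toy illustration of min-maxing: given a fixed point budget, dump points
# into the stats that matter for your build and starve the rest.
def min_max(weights, budget, floor=1, cap=10):
    # Start every stat at the minimum, then spend the remaining points
    # greedily on the highest-weighted stats first.
    stats = {s: floor for s in weights}
    remaining = budget - floor * len(weights)
    for stat in sorted(weights, key=weights.get, reverse=True):
        spend = min(cap - floor, remaining)
        stats[stat] += spend
        remaining -= spend
    return stats

# A "fighter" build: strength/constitution/dexterity matter, the rest don't.
fighter_weights = {"str": 3, "con": 2, "dex": 2, "int": 0, "wis": 0, "cha": 0}
build = min_max(fighter_weights, budget=33)
print(build)
# {'str': 10, 'con': 10, 'dex': 10, 'int': 1, 'wis': 1, 'cha': 1}
```

The social version of the trick is simply running this optimizer without letting anyone see the weight table.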


It's a synonym for "optimizing" or "playing the game perfectly."


2023 slang for “optimizing”; comes from: https://en.m.wikipedia.org/wiki/Minimax


Not 2023, and not from the AI thing. It comes from RPGs, like what the sibling comment explained. It must be as old as D&D.


While it may have existed for a long time, it's gotten significantly more use recently in non-RPG settings. A quick illustration from HN's comment section[1]:

- From 2010 to 2019: 40 occurrences in HN's comments

- Compare that to over 70 for just the past two years.

Basically, it's as if you said “Woke” wasn't a contemporary word, because it used to exist in a niche for a long time.

[1]: https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...


FWIW, while I don't know the origin of this term, I first learned it as a noun, used in the context of AI (the old-fashioned AI, as applied to game development) some 15 years ago. The verb form, however, is something I've only noticed people using in the last few months; it's possible this is due to some recent events that made the term more widely known/popular.


I have never seen min-max used as a noun and I'm not really sure what it'd even mean as a noun, but I've been using it as a verb for several decades.


It's an old-school algorithm family for determining optimal play in turn-based PvP games.
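For the curious, the algorithm itself fits in a few lines. This is a generic textbook sketch over a made-up toy tree (nested lists are internal nodes, numbers are leaf payoffs):

```python
# Minimax: the maximizing player picks the child with the highest minimax
# value, the minimizing player the lowest, alternating down the tree.
def minimax(node, maximizing=True):
    if isinstance(node, (int, float)):  # leaf: return its payoff
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Max moves first, then Min. Min would answer the left branch with 3 and
# the right branch with 2, so Max picks the left branch.
tree = [[3, 5], [2, 9]]
print(minimax(tree))  # 3
```

The RPG "min-maxing" usage discussed in the sibling thread is only loosely related; it shares the name but not the game-tree search.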


> Most people would say lead developer rather than creator...

I think "creator" is entirely appropriate in this case based on the features that he contributed.


[flagged]


“Their” is still correct…


[flagged]


It is and has been since roughly the 14th century. Although, it's apparently a subject of contention amongst grammarians (plural vs singular).

https://www.etymonline.com/word/their


There are lots of things from 14th century English that are no longer used today. Language evolves.

Singular “they” has been considered an error for a couple hundred years.


I see you're on the singular side of the contention amongst grammarians. I think you're wrong, as do a lot of other people. The fact that it's been used widely in this context for the past couple hundred years proves you wrong.


> The fact that it's been used widely in this context for the past couple hundred years

Citation needed.


I already provided one. Where's yours?


~600 years of historical English usage disagrees with you [1].

[1] https://public.oed.com/blog/a-brief-history-of-singular-they...


There are lots of things from Old English that are no longer used today. Language evolves.

Singular “they” has been considered an error for a couple hundred years.


Your source cites usage from the 1300s. It says nothing about the last couple hundred years.


If he built the first prototype, I'd call him a creator of it. It had absolutely nothing to do with who was the first person inside of Microsoft to use LLMs in an IDE completion experience.

I call everyone who builds the first thing the creator(s). This isn't uncommon or weird in my mind at all.

He may be totally full of himself, but I don't see any realistic argument that he isn't one of the creators.


He's not calling himself "one of the creators" though. He's calling himself the creator, which shits all over the hard work of OpenAI, "Albert" and more.


> I work on Copilot now, and I can say this guy is totally full of himself.

Thanks for the reality check, and Occam's Razor suggests this is (unfortunately) the most likely explanation for what we're reading.

Sure, employees get taken advantage of and screwed over by politics all the time. But something about going online and openly (and quite caustically) negging your employer while still on their payroll suggests that there's something seriously off about this guy.

And that in any case, his synopsis of the situation is not to be taken at face value.

Edit: Actually we don't have information about his current status with them. My mistake. Thanks to the commenter below for pointing that out.


Wait, he's still employed there? How is this not going to result in an HR call later on? How can he possibly think this is a good idea? I was under the impression that he had already moved on to another company and was just bashing his old employer, which is still a bad idea, but this is just weird.


Shh let me enjoy this boss battle


Maybe, but he is in the comments here defending himself. I wouldn’t be surprised if this was something where all the heavy lifting was done by a couple people. That’s often the case. If it’s true, really doesn’t make me want to work at a huge company like that.


> Thanks for the reality check

What reality check? One online commenter you don't know contradicts another online commenter you don't know.


The second commenter at least claims to be currently working not just at MS but on that very project. They could be trolling, but I doubt it.


> They could be trolling, but I doubt it.

And you're basing the truthfulness of his statement on what, exactly? That it corroborates with how you want to feel about the original poster? Hardly a "reality check."


Common sense, and a reasonable Bayesian prior for these things. And the fact that (whatever his intrinsic talents) what the first guy was saying just smells like BS, and he seems to backpedal a lot.


I was definitely getting ESR vibes when reading the Twitter thread, and was not surprised to see my suspicions confirmed here.


What does ESR stand for in this context?


Some guy who went around claiming to be responsible for all kinds of stuff, when in fact he just didn't do all that much.


Eric S. Raymond I assume. I’m not sure what the reference is about. Maybe some internet lore I missed.


Yes, Eric S Raymond. He was well known in the late 90s / early 00s for taking credit for all kinds of things that he did not really contribute to.

I believe that he turned 100% conspiracy theorist in recent years. He famously blames Alan Turing for his own judicial punishment and suicide.


In a later tweet, he even mentions there were 6 other people and they built on top of existing work. Doesn't really sound like he's within a mile of solely creating anything, no matter what spin he likes to apply.


> correction: + much of OpenAI eng, plus years of their bleeding edge research

Pretty telling


Also, the hundreds of GitHub employees who worked on GA. It was a dogfooding project internally before talented engineers in GitHub turned it into a commercially viable product.


> The work from prototype to product is an order of magnitude more than prototype alone.

The risk from zero to prototype is an order of magnitude more than the risk from prototype to product.


What? Prototyping is often the easiest and most fun part of a project. Often a prototype taking 1-2 people a few weeks turns into an entire team fleshing it out full-time over the next year or more. That's a lot of risk if it goes wrong. There's usually very little risk in a prototype going wrong; that's one of the main reasons to have a prototype phase to begin with.


You make one prototype in a few weeks with a couple of people then call it a day and start fleshing everything out???

Dude, it might take a few weeks to work out if a big project is even viable in a large company.

Prototyping a full product is not the easiest part of the big project by a long shot. Prototyping is when you find and create the constraints, reduce the chaos and make the project legible so other people can work on it.

If you do it right, it should make it easier for other people to work on the project not harder.


What’s the risk in doing your job inside a large corporation?


"Won't somebody think of the corporations?!"

The older I get, and the more I work on aspects of trying to build businesses, products, and services, the more I lament the loss of a kind of service-oriented ethos that truly makes societies great and that seems to only be "refreshed" (in America, in particular) by the darkest of times*. Here, I think of the people who went through the first half of the 20th century in comparison to the MBAs and similarly naively "optimizing" agents elsewhere in important / controlling positions in current society.

Game theory 'says' that anyone can choose D(efect)-heavy strategies, and in many environments, profit heavily, personally (aka, "The only rules that really matter are these: what a man can do and what a man can't do..."). But, a society with too many such players is not long for this world.

* Much as was said by a certain Founder about the "tree of liberty" - many of the Founders, including even TJ, who could be quite self-interested, understood that liberty requires responsibility and sacrifice ...


So bonuses and performance based compensation shouldn't be a thing?


by his own admission he received a bonus and a promotion


20k and no promotion for de-risking copilot? Context: https://www.levels.fyi/companies/microsoft/salaries

This dude is more than justified in being disappointed. The value added by de-risking a product of this size is massive.


A 20k bonus is a joke of a bonus. Probably < 2-3% of the total comp?


How so? If anything it seems the opposite is true. (Prototyping, almost by definition, is almost zero risk. While bringing stuff to market requires real resources, and invokes actual brand and reputation risk. Might be different in some situations, depending on the resources required at respective stages, but that seems to be the long and short of it).


Sure but a lot of things don't happen without that one person pushing for it. If Alex is correct that he's the one who pushed for it to happen and created the momentum for this product feature to exist, despite many others actively pushing against it, that to me deserves a lot of credit.

Now, I have no idea if that's true and to what extent it's true - it's common for ICs to be unaware of the work others have done behind the scenes to convince the right folks and unblock their work (not saying that this is what happened, just that it's not inconsistent with Alex's perception that he drove this himself) - but I'm not sure how anything you're saying disproves his version of the story. You could say that about just about every startup founder.


What was his contribution to the project? Given the information you give us, he may be the creator of copilot. He may not have made the product, but might be his father, the one who did the crucial original work with most added value, even if it's only 5% of the work.


> He wasn’t even the first person inside Microsoft to use LLMs in an IDE completion experience

It is you who is full of yourself; your attempt to belittle his contributions shows the kind of human you are. Stop hiding behind the keyboard and tell that to his face.

Downvote at will.

