Over the years I've experimented with getting rid of pull requests quite a few times. As a whole, and purely as an abstract idea, I wholeheartedly agree with the article: code review is not the same as a pull request.
With that said, despite advocating at various points for pair programming, mobbing and incremental reviews, having worked with a variety of different approaches, and having had success with some of them, I keep ending up back with PRs at the majority of places I've worked.
The reason why, I think, is one of cognitive load. Pull requests are easy. I don't think they're the best approach to code review (big PRs often don't get reviewed effectively, and being an effective PR reviewer is a hard skill), but they're a good fallback. Everyone knows what a pull request is, the tooling for supporting them is excellent and they require the least upfront work to get a code review.
With that said, I would highly encourage folk to look at different ways of reviewing code. I don't think many people will fully replace pull requests with a straight-to-master development approach using non-PR code review styles, but having the alternatives in your box of tools for specific situations is really powerful.
However, I don't see pull requests going away because, if nothing else, they're an easy, low cognitive load approach for having a second set of eyes on some code.
So it's a little more complex and a little more nuanced than that.
The mechanism is part of the land reform policy and is called the 'Community Right To Buy'.
Ahead of time, a community body (which has a legal definition) can register an interest in buying qualifying land. All of these notes of interest are available from the Registers of Scotland and have to be renewed regularly, etc.
At the point that a landowner indicates to the land register that a piece of land is to be transferred (which could be due to a public sale, a private sale or a direct transfer), the bodies with notes of interest are notified.
This then kicks off a whole process, including an independent market price evaluation, review of land development/business plans and, if that all goes through, final ministerial approval.
It can lead to land being sold for less than the 'offers over' price (Scotland has a weird way of doing land/property sales), or for more, depending on whether the land transfer was a public sale or a private shift-around.
It's not a perfect system and has a lot of flaws, but it's worth being aware of the implementation.
However, there are a lot of checks and balances to it. Not all land falls under the criteria, and there are a lot of requirements around forming a group and the proposed use of the land (such as being geographically local, etc.).
When I last looked into it, I think there had only been two occurrences of the right to buy actually happening over a four-year period.
Which does raise the question of whether the act achieves the goals it was set out to achieve, or whether it's mainly political posturing.
That's not something I can really comment on (nor would I want to).
To preface this, I'm not a massive fan of Postman. I find the user experience to be counterintuitive compared to the likes of Insomnia. With that said, we use it pretty heavily, so I might be able to provide some insights.
This is kind of expanding on koeffiezets' comment.
For me postman's 'value add' can be broken down into three areas.
Technical Capability:
- UI alternative to curl
This is the most basic usage and a lot of the other functionality is an extension of it. For simple GET/POST requests, this is definitely the case. I wouldn't trivialise it though: for folk not familiar with curl there are a lot of gotchas when it comes to escaping, handling auth, etc. I'd say that the UI on top of curl is more accurately viewed as an alternative to things like JetBrains' built-in HTTP client.
- The ability to import OpenAPI/Swagger/protobuf specs (as of recently) and generate collections
This will be the most commonly pointed-at benefit of Postman (and tools like it), in my opinion. It's a pretty solid one, especially if you integrate it into your build process to version and upload the API specs.
This, combined with the 'UI alternative to curl', really gives a lot of the foundational power for the other Postman features. Even as OpenAPI/Swagger docs on steroids with a richer HTTP client this gets pretty powerful, especially with the sharing capabilities which I'll touch on under the team side of things. As with a lot of this stuff, you /can/ do this without Postman. You could use an OpenAPI client generator to produce a curl command.
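As a rough illustration of the DIY route, the sketch below just walks an OpenAPI spec and prints the requests it describes, which is more or less what a collection import does for you. It assumes PyYAML and a local openapi.yaml; both the file name and spec contents are placeholders.

    # Hypothetical sketch: enumerate the requests described by an OpenAPI spec.
    # Assumes PyYAML is installed and an "openapi.yaml" file exists locally.
    import yaml

    HTTP_METHODS = {"get", "post", "put", "patch", "delete", "head", "options"}

    with open("openapi.yaml") as fh:
        spec = yaml.safe_load(fh)

    # Fall back to a placeholder base URL if the spec has no servers block.
    base_url = spec.get("servers", [{"url": "http://localhost:8080"}])[0]["url"]

    for path, item in spec.get("paths", {}).items():
        for method, op in item.items():
            if method.lower() not in HTTP_METHODS:
                continue  # skip non-operation keys such as "parameters"
            print(f"{method.upper():7} {base_url}{path}  # {op.get('summary', '')}")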
- Auth handling
So Postman has pretty rich support for a bunch of auth types (API key, no auth, OAuth 1.0 & 2.0, signatures, NTLM, etc.). This, I think, is where some of the power of Postman (and tools like it) really begins to shine. Handling auth in curl can be a real pain. Creating a way of handling auth which can be shared across a team becomes even more of a pain, especially if we're talking about auto-refresh and the like. At this point you're really in the realm of writing small-to-medium custom scripts to wrap the auth handling, save the tokens and refresh them.
Having a standardized way of handling this with the ability to extend it if needed can become a massive time-saver.
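To make that concrete, here's a minimal sketch (not Postman's implementation, just an illustration) of the kind of token-handling script teams end up writing and passing around by hand. The token URL and credential names are placeholders.

    # Hypothetical example of a hand-rolled OAuth2 client-credentials wrapper:
    # fetch a token, cache it, refresh shortly before it expires.
    import json
    import time
    import urllib.parse
    import urllib.request

    TOKEN_URL = "https://auth.example.com/oauth/token"  # placeholder
    _cache = {"token": None, "expires_at": 0.0}

    def get_token(client_id: str, client_secret: str) -> str:
        # Reuse the cached token if it still has at least 30 seconds left.
        if _cache["token"] and time.time() < _cache["expires_at"] - 30:
            return _cache["token"]
        body = urllib.parse.urlencode({
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
        }).encode()
        req = urllib.request.Request(TOKEN_URL, data=body, method="POST")
        with urllib.request.urlopen(req) as resp:
            payload = json.loads(resp.read())
        _cache["token"] = payload["access_token"]
        _cache["expires_at"] = time.time() + payload.get("expires_in", 300)
        return _cache["token"]

    # ...and every request then has to remember to add:
    # {"Authorization": f"Bearer {get_token(cid, secret)}"}

And that's before you get into per-environment secrets, storage and sharing it safely, which is exactly the friction a standardized, built-in option removes.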
- Mock Servers
There's a bunch of ways to do mock servers and I wouldn't say Postman is technically the best (my personal preference is stuff like WireMock). With that said, sometimes 'technically the best' loses out to what's immediately available. Having it built into the system which already has your OpenAPI specs, has SWE familiarity and is already there will often make this win out. It can also get folk thinking a bit more about their mocks/contracts than they would otherwise, because it's just part of the existing toolchain.
You could technically do this with netcat, or using a language-specific approach, or another tool like WireMock. The first is going to be a pain to maintain, the second doesn't work great for multi-language environments, and while WireMock and its ilk are easy to get up and running with, they do require additional setup and management.
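For a sense of scale, the 'language-specific approach' can be as small as the stub below (the route and payload are made up), and it works fine right up until you need to share, version and keep it in sync across a team.

    # Bare-bones stub server using only the Python standard library.
    # The route and canned response are invented for illustration.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class StubHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path == "/api/orders/42":
                body = json.dumps({"id": 42, "status": "shipped"}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        # Serves the stub on http://localhost:8000 until interrupted.
        HTTPServer(("localhost", 8000), StubHandler).serve_forever()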
- Postman echo
Kind of an alternative to the likes of pipewire or running your own nc/other implementation. Simple concept, simple implementation, but having the ability to create an endpoint to post data to, see what the output looks like, and run it somewhere other engineers can access and collaborate on easily is a nice-to-have. Basically it saves on the setup time and the 'individual contributor trying to collaborate' side of things.
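As a quick sketch of the idea (assuming the public postman-echo.com endpoints, which reflect the request back at you; any self-hosted reflection endpoint works the same way):

    # Post a request to an echo endpoint and print back what the server saw:
    # headers, body, query params, etc. Handy for debugging what a client sends.
    import json
    import urllib.request

    payload = json.dumps({"hello": "world"}).encode()
    req = urllib.request.Request(
        "https://postman-echo.com/post",  # assumed public echo endpoint
        data=payload,
        headers={"Content-Type": "application/json", "X-Debug": "1"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(json.dumps(json.loads(resp.read()), indent=2))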
- Newman
This loops back to curl: newman is a CLI tool for running Postman collections. I'm giving it a special callout because, if it wasn't for newman, I'd have a lot more reservations about using Postman. Being able to take collections/imports from the UI and run them with newman for things like Helm chart tests, continuous testing or running easily in a container lets the effort invested into creating stuff in Postman extend beyond just the local dev experience.
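As a rough sketch of that reuse (file names are placeholders, and the wrapper itself is just an illustration rather than how we actually run it), a CI smoke-test step can simply shell out to newman:

    # Hypothetical CI smoke test: run an exported Postman collection via newman.
    # Collection/environment file names are placeholders.
    import subprocess
    import sys

    def run_smoke_tests() -> bool:
        result = subprocess.run(
            ["newman", "run", "smoke-tests.postman_collection.json",
             "--environment", "staging.postman_environment.json"],
            capture_output=True, text=True,
        )
        print(result.stdout)
        # newman exits non-zero when a request or test assertion fails.
        return result.returncode == 0

    if __name__ == "__main__":
        sys.exit(0 if run_smoke_tests() else 1)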
- Other
There's also a whole area around the API workflow/editor that I'm not going to touch on as I don't know that side of it well enough, but it is there and something to be aware of.
'Team' Capability:
With everything above, it's important to remember all of this can be done in a shared collaborative environment with a full audit trail and potentially SSO depending on the tier. Removing all the friction from that is a pretty big deal (especially as companies grow).
The point about using a text-based standard is valid (this is one of the things I like about the JetBrains HTTP client). But managing that for all the functionality of Postman would be a challenge, and bits would be likely to rot.
Just having a tool which does most of it well enough can be enough as it reduces friction.
Another point on this is cross-OS teams. We have a mix of Linux/Windows/OSX users. Curl is great, and does work reasonably well on Windows, but trying to maintain scripts/bespoke implementations/knowledge across folk on all these platforms is a losing battle.
Integrations:
Kind of a final and often overlooked note, but there's also a rich integration system with Postman. Integration with New Relic/Datadog/etc. to record test results is one example, but it's a pretty solid ecosystem.
Closing thoughts?
That was a ramble. To summarise:
- Postman is definitely bloated, but that bloat/breadth of functionality can be useful
- You can do everything Postman does without Postman, but depending on the team size/number of services/etc. there's value in having a standardized, cross-OS and easy-to-share solution.
If I was just doing stuff as an individual contributor, or had a single team? I wouldn't necessarily go with it. For larger orgs, or as you go from startup to scale-up, there are definitely advantages in having a master-of-none tool to help adoption. Regardless of whether it's Postman, Insomnia or Hoppscotch, I think reducing it to 'curl with a UI' is leaving a lot on the table.
We ended up using Puppet Relay quite a bit, which is sort of a halfway house between the two (visual editor, but YAML-backed, with low code instead of no code).
The main advantage from my side was being able to get adoption from a wider group of people. This included both very junior technical folk, who were able to take on more complex tasks than they would have been able to otherwise, and non-technical folk, who found starting with the UI a lot less intimidating.
Eventually almost everyone gravitated to using YAML, but I don't think I'd have seen the internal adoption we had without the UI component.
Mileage will vary though. If we had been at a more technically mature place when adopting relay, then I reckon the UI would have seen a lot less use.
Reading the article, what surprises me the most is that they had to search the mail in the first place.
For the information they were after, any relatively primitive mail tracking system would suffice. To be fair, even most outbound spam/virus scanners would log the level of info being looked for.
I can't help but feel it's a bit of an overreaction. A very targeted search was done on the mail headers to try and track down a leak; it seems fairly routine work. The lack of notification soon after the event is a big issue, but even so.
To see the reaction, you'd assume that the administrators were going through each person's mailbox reading every single mail.