
EXACTLY!

But... don't bother too much trying to explain this to people who don't intuitively grok it right away... some just seem to never get it no matter how hard you try to explain it to them; it's like their brains are "wired differently" when it comes to reading and understanding code - they don't get the advantage and meaning of unification / universality / "one solution for many problems" etc.

Go is NOT my favorite programming language, but there's a stroke of simple genius in it that probably only materialized because its creators were left alone inside Google to work on it their way, and just implemented their solution to things without being bothered by "language experts"...

EDIT+: not saying "colorless + channels" is always better or anything like that - it's all tradeoffs... async/await code is probably much more readable than channel hell for many/most cases (but only because it's less powerful - no true parallelism possible).


Everybody's starting to figure out that to get affordable development you need:

1. fast development (both Python and node)

2. boring platform where it's usually OK to leave something unupgraded for >5 yrs (Python wins here hands down)

3. cheap devs (good Node.js devs have a lot of options to "jump around" between front end, back end and frameworks, so more room for leverage... with Python you can hire the cheapest new grads, since it's now taught in most unis)

...on top of this:

- performance in terms of "requests served per CPU power" does NOT matter for most apps, and when it does, caching saves most situations - and if that's still not enough, modern Python can deliver now (async is mature)

- using the same stack as ML and data science comes as a bonus - you can have backend-full-stack devs jumping between data pipelines, APIs/web, ML models, data-analysis code etc., who can stay separate from frontend-full-stack devs doing web frontends + hybrid mobile etc.

It would be great if we could also standardize on a static, high-performance language common to all (eg. usable for special performance-sensitive code, either as standalone services or as Python/Node modules)... this area's a mess: too much overlap, duplicated work, and it's too hard for devs to jump from one ecosystem to another (Rust and C++ as lib/extension langs... Java/Kotlin, Go or C#/F# for standalone services... now Swift is an option too...).
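On the "async is mature" point above: here's a minimal stdlib-only sketch of the model modern Python uses to serve many requests concurrently on one thread (the `fetch` function and its doubling logic are made up purely for illustration):

```python
import asyncio

async def fetch(i):
    # simulate a non-blocking I/O call (e.g. a DB or HTTP request)
    await asyncio.sleep(0)
    return i * 2

async def main():
    # handle many "requests" concurrently, no threads needed
    return await asyncio.gather(*(fetch(i) for i in range(5)))

results = asyncio.run(main())
# results == [0, 2, 4, 6, 8]
```

Real async frameworks (aiohttp, Starlette, FastAPI) build on exactly this event-loop machinery.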


>(async is mature)

is Django or Flask async yet?


Why do people like Django and Flask so much? They're heavy, awkward dinosaurs with nasty codebases perpetuating really bad coding patterns (globals?! (multiple-)inheritance-till-your-head-spins?! - that's what that soup of whateverMixins is, let's stop pretending... serializers that DE-serialize and do different things depending on the situation?! ...stuff you couldn't add type annotations to because the types would depend on calling contexts far away etc. etc. etc.).

If you code Python, at least take some time to read more modern code: see responder (not my favorite, but cool), FastAPI, Starlette, asyncio HTTP examples etc. Or see older examples (as old as Django) like web.py.


Also, check out FastAPI, which is quite nice. Fully async with a lot of other nice features:

https://fastapi.tiangolo.com/


Does that have feature parity with DjangoREST?


...what would "feature parity" even mean? There's no universal checklist of "required" API framework features.

Short answer: no, because FastAPI makes no assumptions about your ORM (bring whatever, or use none... your responsibility, your choice - I use SQLAlchemy Core + the databases module for async support, no ORM, but full use of Pydantic for enforcing schemas, just "divorced" from the persistence layer - maybe you don't even need one), or about anything else.
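The "schemas divorced from persistence" idea can be sketched like this, assuming Pydantic is installed (the model and fields are hypothetical): the Pydantic model enforces the API contract and knows nothing about how, or whether, the data is stored.

```python
from pydantic import BaseModel, ValidationError

class UserIn(BaseModel):
    # pure schema: no table, no ORM, no persistence assumptions
    name: str
    age: int

user = UserIn(name="Ada", age=36)  # valid payload, parsed and typed

try:
    UserIn(name="Bob", age="not-a-number")  # bad payload
    rejected = False
except ValidationError:
    rejected = True  # schema enforcement, independent of any DB
```

Whatever you do with `user` afterwards (SQLAlchemy Core insert, message queue, nothing at all) is a separate decision.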

I actively dislike DjangoREST and tbh Django too... but I admit that in combination they can save A TON of time when prototyping something.



Not that I'm aware of, but Tornado is a great async web framework. We use MongoDB and an ODM called umongo (that uses Motor for async, pymongo-like connections). I'm not sure what good async ORMs are available.



Use Quart in place of Flask


Can "Machine Learning Engineer" positions be remote for 2020? (I can fly in to meetings ~twice a month or so, but for personal reasons + covid I'm only looking for remote or Bucharest, RO this year.)


    Location: Bucharest, Romania (UTC+3)
    Remote: Yes
    Willing to relocate: not in 2020, possibly later
    Technologies: { solid: [ Python, Keras, TensorFlow, Pandas/Numpy/scikit-learn,
        Jupyter, Django/Flask/FastAPI/asyncio, REST, Docker, AWS, Google Cloud,
        SQL, Postgres, MySQL, MongoDB, Linux ],
      minor: [ Go, Fastai, Pytorch, R, Node.js, React/Redux/Vue/Svelte, Java, Bash ] }
    Résumé/CV: Upon request
    Email: io@neuronq.ro
    ---
Experienced software engineer (7+ years, Python expert, some ML experience, full-stack experience), looking for work that is at least 1/3 machine-learning-engineering or data-science-engineering related (current focus is NLP/NLU but open) - this is my growth direction and I am NOT interested in work that falls completely out of this area! I'm ideal for your team if you'd benefit from a combination of: (a) senior-level general software engineering (OOD/SOLID, TDD, functional-programming) + (b) mid-level machine-learning and machine-learning-engineering knowledge and experience (plus desire to grow more into this) + (c) very wide breadth of full-stack + product & project knowledge and experience.


...if you consider quality a lever, you're not the kind of person anyone should want to work with!

You agree on quality standards before you begin, and you deliver the quality level you agreed on beforehand - not more, not less - at a profit or at a loss. (If things go better than expected, you deliver sooner, giving the product owner the ability to add features or redirect effort, actually using your "superpowers", if any, to give the business a real advantage.) Otherwise you're not a professional; you're either an amateur or maliciously dishonest/exploitative!


You could just as easily say that you agree on the feature set beforehand, and you deliver that feature set, not more, not less (you deliver sooner if you finish the features early).

"The quality level you agreed beforehand" is meaningless without some infrastructure and processes in place to verify it. And in practice, those infrastructure and processes are compromised at least as often as the feature set, the budget, or the timeline.


Thanks! Will dig more into Cortex and/vs MLflow now ;)


"Approachable" means that it might require some effort from the reader (eg. in the worst-case scenario, a reader not familiar at all with the domain would have to look up the definition of every word in a sentence, and do that recursively a few steps, until he/she gets some domain familiarity by reading the equivalent of a few hundred pages or less), but you can approach it, and in exchange it gives precise and rich information!

I like Wikipedia as it is - let's not turn it into IdiocracyPedia for the sake of accessibility... If you're not lazy and you're willing to put some focus and time into your research, you'll be able to understand quite unfamiliar subjects from Wikipedia; there's nothing blocking you, as opposed to eg. lots of academic articles that may contain un-google-able jargon and unexplained/unmentioned domain-specific assumptions.


I've done undergrad-level mathematics, and a lot of Wikipedia articles about mathematics are complete gibberish to me even when I put a lot of effort (hours) into understanding them. My estimate would be that more than 99% of the population would not be able to gain anything from reading them.


It may be hard to understand for people with no background, but that description includes the crucial bit of information that the OP's "easy to grok" simplified description lacks:

> inhibit the [...] RNA polymerase of the [..] virus

If it simply stopped RNA synthesis (eg. inhibited any RNA polymerase), or if it actually broke RNA in general, it would kill HUMANS just as well!

The point of antiviral compounds is to selectively inhibit/kill mechanisms/components of the virus and not of the human host... there are hundreds of thousands of antiviral and antibiotic compounds that are not very useful because they'd kill humans just as well, or give them horrible cancers, or god knows what else...

As the saying goes... "Everything should be made as simple as possible, but not simpler."


I noticed that hand waving in the OP too.

Any insight on how it manages not to harm non-virus stuff?

(Not a biologist. Also you're using a lot of italics and it harms the readability at least for me.)


Senior Software Engineer with 7+ yrs experience, looking to do more machine-learning engineering and data-science engineering work:

Location: Bucharest, Romania

Remote: Y

Willing to relocate: N

Technologies:

    - OOD / SOLID, functional-programming principles, Microservices, REST APIs, TDD
    - Languages and Frameworks:
      + Python (5+ yrs xp):
        * Django, Flask, FastAPI, aiohttp
        * scikit-learn, Pandas, TensorFlow (2.x)
      + Node.js
      + Other: SQL, React
    - Machine-Learning & Data-Science:
      + basic DS w/ Pandas & related tools
      + classic supervised-learning and clustering (GMLs, SVMs, RFs, Bayesian)
      + deep-learning (w/ Keras/TensorFlow): dense-NNs, conv-nets, LSTM RNNs
      + basic NLP models
    - Cloud & DevOps: AWS (EC2, RDS, EMR), Google Cloud (Compute, SQL), Docker, Linux/Debian
    - Tools: Git, Bash, Jupyter/iPython
Résumé/CV: https://www.linkedin.com/in/andrei-anton/, please email me for a more readable Resumé!

Email: io@neuronq.ro


> I feel like the developer experience is much worse than let's say Django REST Framework

Give FastAPI a try! You probably have DRF Stockholm syndrome, like many do. The joy of having all the pieces fit right in, be understandable, and have validation auto-generated from type annotations (if you choose to) is a-f-mazing!

After using DRF quite a lot for quite a while, I can clearly self-diagnose myself as having been abused by it - such ugly patterns you always end up coding around, fighting the framework everywhere...

Use Flask if you're afraid of Python async - there are probably good reasons to be; I've never had to debug it in production... but LIKING DRF?! I cannot fathom that. I mean, one can like Django itself - it's good for what it was built to do - but what abominations people built on top of it instead of starting from scratch... ugh!


I don't understand which parts of DRF you don't like, or why FastAPI is better.

In my case the reason I like DRF is because it handles most of the work for me - I do my best to make my APIs RESTful which means I expose the underlying data model whenever possible. DRF makes that easy and I only need to handle the authentication classes and hiding sensitive fields in the serialiser.

As an example, let's assume I have a User model in Django and I want the mobile app to be able to read and edit the user's profile (name, bio, etc). With DRF I just create a ModelSerializer for my User model, put it in a ViewSet with the appropriate permission classes (so write operations are only allowed on the currently logged in user) and call it a day.

With FastAPI, looking at their homepage, it seems like I have to implement every single HTTP method (GET, PUT and PATCH in this case) separately? I just don't see the point.

Maybe if you're looking to create an RPC-style API then I guess it could get in the way, but for REST APIs I don't see why this is better.


You're probably lucky to have that use-case.

Most problems I had were with:

(1) updates of deeply nested tree-like objects - model serializers have limited support for nested updates, and it breaks down fast; if your frontend-exposed data looks more like trees than tables, you'll be fighting it all the way

(2) REST semantics can only get you so far! You end up doing some form of domain-driven design sooner or later, and your verbs are no longer (just) create/read/update - you have "approve", "reject", "approve with comment", "flag for review", "restore to version 123" etc. REST is not CRUD: you can have REST semantics with only the R from the default actions, and everything else can be domain-specific.

In general I prefer to (a) not expose the backend's actual data model to the API in all its gory complexity, and (b) figure out what actions make sense for the application domain - drop the CRUD handcuffs.

Once these choices are made - which I now prefer to do early on, pre-emptively - something like DRF becomes pure pain, 0% gain: I use nothing from it, and it's too dumb to autogenerate stuff like validation or to offer structural patterns from which to hang code for things like sub-object permission checks or anything like that...


I agree with you on (1), and I'm not saying models and serialisers will always be the right choice. In certain cases custom serialisers will make sense and you will have to write code, but you'd be writing the same code with a framework such as FastAPI or Flask.

Regarding (2), REST will get you most of the way there, and depending on what "approval" means (does it trigger some process, or does it just change a flag in the database?) it might just be a PATCH "approved = true" on the RESTful endpoint; when that's not the case, you can indeed extend DRF ViewSets with custom actions.

I disagree on not exposing the data model; unless the model is really bad or needlessly complex, I think exposing the model is fine instead of "fabricating" a new model full of RPC-style endpoints. I've wasted way too much time in RPC-land, where every little extra bit of data needed a change in some other microservice, managed by another team, with a convoluted deployment process, meaning the change took a good part of the day - whereas if they were just using REST I would've had the data to begin with. I don't think CRUD is handcuffs; at least the "R" part feels very valuable to me.


This discussion was really valuable to see.

I started off with FastAPI, and it got me off the ground really quickly without a lot of boilerplate, and my API was mostly read only, internal, I didn't even bother with authentication at this point. The prototype worked well.

But then I started to add in all the grown-up, boring stuff and realized I would have to do a lot from scratch. Now I'm porting it to Django/DRF and the stability feels really nice. I kinda wish I had just started with DRF.


There is a package for DRF that adds automatic validation from type annotations: https://github.com/rsinger86/drf-typed-views (disclaimer: I'm the author)

Regarding other commentary on DRF: I wonder how many pain points result from trying to squeeze non-CRUD operations into DRF's ModelViewSet/ModelSerializer constructs? My company's approach has been to use that DRF magic for dumb CRUD resources and use plain function-based views for everything else, with the freedom to use Pydantic/Marshmallow/etc. This has made us very productive, and I can't imagine having to write create/update/delete/get/list operations one at a time.

That said, I'm sure there are use cases that DRF isn't suited for and I would definitely look at FastAPI for new projects.

