Looks pretty great! The free tier also looks reasonable, and the pricing on the other tiers isn't outrageous if you use the service consistently. Unfortunately, I fall into the big gap between the free tier and the Basic plan: I can't justify yet another subscription that I only use a couple of times a year. That said, I would happily pay the $6 in the months when I do use the service. Given the churn issues, I'm surprised more SaaS offerings don't work that way.
JobLens is cool! This past month we had the same idea -- it's been a fun project:
https://hnjobs.u-turn.dev
ChatGPT does an incredible job parsing, but then lots of effort goes into normalizing and deduping each field. Long story short, your results look quite good to me!
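To give a sense of that cleanup step, here's the kind of thing involved (the field names and rules are made up for illustration, not our actual pipeline):

    # Rough sketch of post-parse cleanup: field names are illustrative only.
    import re

    def normalize_company(name: str) -> str:
        """Lowercase, strip punctuation and common legal suffixes so variants collapse."""
        name = name.strip().lower()
        name = re.sub(r"[^\w\s]", "", name)                  # drop punctuation
        name = re.sub(r"\b(inc|llc|ltd|gmbh)\b", "", name)   # drop legal suffixes
        return re.sub(r"\s+", " ", name).strip()

    def dedupe_postings(postings: list[dict]) -> list[dict]:
        """Keep one posting per normalized (company, title) pair."""
        seen, unique = set(), []
        for p in postings:
            key = (normalize_company(p["company"]), p["title"].strip().lower())
            if key not in seen:
                seen.add(key)
                unique.append(p)
        return unique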
Your project looks quite impressive as well, especially the extracted application URLs and the candidate profiles; we didn't get that far yet :) Automating tedious work like data extraction and transformation is a great use case for LLMs.
Not sure if you found this as well, but gpt-3.5-turbo-0613 does a poor job following instructions other than parsing. So, to work around this, we first prompt plain gpt-3.5-turbo with the rules we want applied to, say, an extracted field, and then go back to gpt-3.5-turbo-0613 to parse with ChatGPT functions.
Bottom line, every single post requires approximately 10 different prompts to refine the extraction.
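For anyone curious, the flow is roughly like this (a simplified sketch with a made-up schema and prompts, using the 0613-era function-calling API; it assumes OPENAI_API_KEY is set in the environment):

    import json
    import openai

    def apply_rules(raw_post: str) -> str:
        """Step 1: plain gpt-3.5-turbo applies the cleanup rules as free text."""
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "Rewrite the job post applying these rules: "
                 "expand abbreviations, one location per line, ISO country codes."},
                {"role": "user", "content": raw_post},
            ],
        )
        return resp.choices[0].message.content

    def parse_fields(cleaned_post: str) -> dict:
        """Step 2: gpt-3.5-turbo-0613 extracts structured fields via functions."""
        resp = openai.ChatCompletion.create(
            model="gpt-3.5-turbo-0613",
            messages=[{"role": "user", "content": cleaned_post}],
            functions=[{
                "name": "record_job",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "company": {"type": "string"},
                        "title": {"type": "string"},
                        "locations": {"type": "array", "items": {"type": "string"}},
                        "remote": {"type": "boolean"},
                    },
                    "required": ["company", "title"],
                },
            }],
            function_call={"name": "record_job"},
        )
        return json.loads(resp.choices[0].message.function_call.arguments)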
Yeah, I had the same experience: divide and conquer works better than trying to do everything with a single prompt. Are you interested in comparing notes? Feel free to ping me via the email in my profile :)
Product engineering leader helping build great teams. Twenty-plus years of software development. Over the past decade have helped turn around multiple products and teams in crisis.
Currently focusing on helping organizations apply deep learning and LLMs for information extraction from unstructured sources. A recent project in that vein: https://hnjobs.u-turn.dev
When I saw the post about FoundationDB, I remembered the exact same demo running on a cluster of Raspberry Pi instances! Sadly, I couldn't find a recording of it on YouTube.
I feel like I saw something a bit more refined (I recall node statuses aggregated on one cool UI), so this may have been an earlier iteration, but the beginning of the following video has some of what we're talking about: https://youtu.be/Nrb3LN7X1Pg
We've been using ChatGPT to extract information about jobs and candidates for the past two months. With structured jobs and candidates, you can filter instead of just searching and scrolling.
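As a toy illustration of why the structure matters (the fields here are hypothetical, not our actual schema), once postings are records you can filter them directly:

    from dataclasses import dataclass

    @dataclass
    class Job:
        company: str
        title: str
        remote: bool
        locations: list[str]

    jobs = [
        Job("Acme", "Backend Engineer", True, ["Berlin"]),
        Job("Globex", "Data Scientist", False, ["NYC"]),
    ]

    # e.g. show remote roles only, instead of scrolling the whole thread
    remote_jobs = [j for j in jobs if j.remote]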
There's a small write-up on how we went about processing HN Who Is Hiring with ChatGPT:
> Interestingly, those who felt the greatest boost in mood also experienced the biggest drop in heart rate variability.
A drop in HRV isn't generally a good sign, so I checked the paper and found this:
> No significant changes were found in heart rate variability or resting heart rate over the course of the study in either of the groups (Figures 4C and 4D)
There was a reduction in respiratory rate for those with an increase in daily positive affect. Bottom line: it's unclear whether this particular study points to a positive health outcome other than feeling happier.
> Interestingly, change in respiratory rate was negatively correlated with change in daily positive affect (Figure S5; r = -0.24, p < 0.05), suggesting that participants who showed the highest reduction in respiratory rate also showed the highest daily increase in positive affect over the course of the study (Figure S5).
We support Java through a GitHub Action that generates a stackaid.json file containing your first- and second-order dependencies, which we then consume. More details here:
https://www.stackaid.us/#stackaid-json
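Conceptually (and purely as an illustration; the real schema and action are documented at the link above), the idea is to walk the resolved dependency graph and record your direct dependencies plus their dependencies:

    import json

    def build_dependency_report(graph: dict[str, list[str]], root: str) -> dict:
        """Collect first-order deps of `root` and, for each, its own deps (second order).
        The shape below is a made-up example, not the actual stackaid.json schema."""
        return {
            "dependencies": [
                {"name": dep, "dependencies": graph.get(dep, [])}
                for dep in graph.get(root, [])
            ],
        }

    # Example resolved graph for a hypothetical Java project
    graph = {
        "my-app": ["com.google.guava:guava", "org.slf4j:slf4j-api"],
        "com.google.guava:guava": ["com.google.guava:failureaccess"],
        "org.slf4j:slf4j-api": [],
    }

    with open("stackaid.json", "w") as f:
        json.dump(build_dependency_report(graph, "my-app"), f, indent=2)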
I wasn't aware of SPI. I'd be happy to look into it and reach out to them.
To give some context, if SPI were interested in receiving money for repositories associated with SPI, they would work with the developers to claim those projects on StackAid and associate their Stripe account. SPI's governance model can then decide how to allocate those funds.