I'm curious what the value of a frontend would actually be on this kind of project, and where it sits in the engineering lifecycle. I've worked in the same space on instrument control software (microfluidics, synthetic biology), and I feel like what you would gain from, say, putting a slider on a flow rate is of minimal use to the development of the instrument, especially in an ad-hoc iterative environment.
The engineer who is writing this software and building the system in tandem will be able to express most of their experimental/engineering needs via a CLI tool and something that visualizes instrument state and sensor readings. For example, I've gotten quite a bit of mileage out of throwing up Grafana as a graphing interface and then using IPython as a REPL to control logical subsystems, such as a pump or a motor. This lets you interrogate the empirical limits of your system without spending time centering divs or fussing over button placement.
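To make the REPL-over-GUI idea concrete, here's a minimal sketch of the kind of thin subsystem wrapper you'd drive from an IPython session. The `Pump` class, command strings, and flow-rate limits are all invented for illustration; a real instrument would have its own protocol (often over pyserial or similar).

```python
class Pump:
    """Hypothetical pump wrapper meant to be driven interactively."""

    def __init__(self, transport):
        # transport needs only a write(bytes) method, e.g. pyserial's
        # Serial, or a fake object for bench testing.
        self.transport = transport
        self.flow_rate = 0.0

    def set_flow_rate(self, ul_per_min: float) -> None:
        """Command a new flow rate in microliters per minute."""
        if not 0.0 <= ul_per_min <= 5000.0:  # made-up hardware limit
            raise ValueError("flow rate out of range")
        self.transport.write(f"RATE {ul_per_min:.1f}\n".encode())
        self.flow_rate = ul_per_min

    def stop(self) -> None:
        self.transport.write(b"STOP\n")
        self.flow_rate = 0.0


class FakeSerial:
    """Stand-in transport so the sketch runs without hardware."""

    def __init__(self):
        self.log = []  # record every command sent

    def write(self, data: bytes) -> None:
        self.log.append(data)
```

In a lab session you'd instantiate it against the real port and sweep `set_flow_rate` by hand while watching the Grafana dashboard, which is all the "frontend" most early-stage tuning needs.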
This is only true if the engineer is the intended end user of the system. As you scale out and include other disciplines in the project you do have to consider the user interface as the window into the instrument.
I've played with automatically generating GUI elements by parsing the underlying control code before. I think it's a fun bit of meta problem solving, but I've always ended up in a corner that requires some sort of bespoke tool or widget to solve a tricky problem arising from the scientific domain. I think you might be fine if all you're doing is controlling a syringe pump or similar, but as the experimental needs become complicated you'll end up rewriting the control interface as a deliberate experience and toolbox.
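For readers who haven't tried it, the auto-generation trick usually amounts to introspecting a control method's signature and mapping parameter types to widgets. A hedged sketch of that idea, with an invented `Stage` class standing in for real control code:

```python
import inspect

def widget_specs(func):
    """Map numeric parameter defaults to sliders and bools to checkboxes.

    This is the easy 80%: anything with a plain numeric or boolean
    default gets a generic widget for free.
    """
    specs = []
    for name, param in inspect.signature(func).parameters.items():
        default = param.default
        # Check bool before int/float: in Python, bool is a subclass of int.
        if isinstance(default, bool):
            specs.append({"name": name, "widget": "checkbox", "value": default})
        elif isinstance(default, (int, float)):
            specs.append({"name": name, "widget": "slider", "value": default})
    return specs


class Stage:
    """Invented example of the control code being introspected."""

    def move(self, x_mm: float = 0.0, y_mm: float = 0.0, relative: bool = False):
        ...  # real implementation would command motion hardware
```

The corner you hit is everything this can't express: a well-plate picker, a region-of-interest selector on a camera feed, an interlocked sequence of steps. Those are exactly the domain-specific widgets that end up hand-built.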
I've gone back and forth on this. I'm involved in very early stage development of sensors and instrumentation systems. If anything I create threatens to end up in a real product, there's an entire department of coders who figure out the user-facing side of things.
What I find useful is real-time displays while I'm still at the stage of manually adjusting and optimizing things, and sometimes a few controls so I don't need three hands in the lab. On the other hand, dialog after dialog of settings can often be replaced by a human-readable text file.
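The settings-file approach can be as simple as an INI file parsed at startup. A small sketch with made-up parameter names (any INI/TOML/JSON format works equally well):

```python
import configparser

# Example contents of a hypothetical settings.ini; the section and
# key names are invented for illustration.
SETTINGS_TEXT = """
[laser]
power_mw = 12.5
enabled = yes

[stage]
speed_mm_s = 4.0
"""

def load_settings(text: str) -> dict:
    """Parse INI-format settings into a flat dict of typed values."""
    cfg = configparser.ConfigParser()
    cfg.read_string(text)
    return {
        "laser_power_mw": cfg.getfloat("laser", "power_mw"),
        "laser_enabled": cfg.getboolean("laser", "enabled"),
        "stage_speed_mm_s": cfg.getfloat("stage", "speed_mm_s"),
    }
```

A file like this is diffable, versionable, and editable by the scientist without touching the code, which is most of what a settings dialog buys you at this stage.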
Tkinter and Matplotlib actually work well enough for my use. I don't need to make anything look as slick as a management dashboard, or completely user-proof. I've developed a great appreciation for the time and effort (i.e., cost) that go into creating fabulous GUIs. Most makers of sensors and controls who send out a "demo" or "setup" app never get to that level.
In my experience, having a quick and dirty HMI/GUI is a must-have during commissioning (initial start-up, tuning, etc.), debugging, and shutdown. In all of the industrial automation and building automation projects I've been involved in, there's always been a set of commissioning HMIs/GUIs different from the HMI/GUI used in production.
The author has a YouTube channel and speaks at length about cellular automata and other artificial intelligence. It's a good time! (https://www.youtube.com/@EmergentGarden)
Volta Labs, Inc. | Senior Software Engineer, Director of Software Engineering | Boston, MA | Full-time | https://www.voltalabs.com/join.html
Volta Labs, Inc. is an MIT spin-off that has unlocked powerful new capabilities with our proprietary fluidic technology. Our vision is to make biological automation as agile, scalable, and reliable as digital electronics. To realize this vision we are building a platform technology by engineering the full-stack -- hardware, software, chemistry, and biology. Check the video, it's pretty cool. (https://www.youtube.com/watch?v=z0NBsyhApvU)
We're looking for a Director of Software Engineering and a Senior Software Engineer to help scale up the software side of the house. We do fascinating work with robotics, computer vision, and tying it together via our nascent cloud infrastructure. Help us transform bioautomation into a new paradigm of accessibility and scalability.
Tech Stack: Python, JavaScript, C, React, Redux, AWS, the fundamental building blocks of life.
ReadCoor | Hardware Software Engineer and Pipeline Software Engineer | Cambridge, MA | ONSITE, REMOTE | https://www.readcoor.com
ReadCoor is a company developing a platform for DNA sequencing in 3D tissue. We are a multidisciplinary team of biologists, chemists, hardware and software engineers building this technology to assist in the next leap of biological research and discovery.
Why 3D Spatial Sequencing?
Current state of the art DNA sequencing will tell you what raw "source code" you have. Various RNA sequencing technologies and recent single cell techniques will give you "runtime configuration" information. These techniques will tell you what molecules are present in a tissue, but they do not tell you where the molecules are located.
Our sequencing technology maps their location in exquisite detail. We can, for example, tell you not just that a virus's DNA was found in a sample, but also allow you to view the intact tissue and see which individual cells were infected. We apply this technology to all areas of biomedical research, including infectious disease, neuroscience, and cancer biology.
Pipeline Software Engineer
Our backend pipeline engineering role is responsible for building and scaling out our ingestion and processing pipeline.
This pipeline uses a lot of python (numpy, scipy, skimage, pandas, dask, zarr to name a few libraries) to process individual tissue samples which range in size from 1-5 TB of images (and growing).
UI Developer
Our UI Developers are responsible for creating our web-facing data visualizations and statistical tooling as well as working with the hardware engineers and biologists to create cutting edge sequencing interfaces.
Our UI team uses JS, focusing primarily on React/Redux and TypeScript.
Bioinformatics Developer
Our Bioinformatic Developers are responsible for interfacing with our biologists, primer design, and creating bioinformatic pipelines and analyses to support our three-dimensional spatial sequencing.
Our bioinformatics team uses Python and a suite of AWS services to facilitate this work.
If you are not a Software Engineer, we're also hiring all sorts of scientists!
Hardware Software Engineer
Our Hardware Software Engineer is responsible for writing code that enables our automated sequencing platform.
To accomplish this we use Python, C++, and JS to control a complex multi-axis imaging platform with integrated high-precision fluidics.
I really wanted to apply, but the process was very annoying. It seems redundant to re-type into various forms all the information already clearly stated in my CV.