
You can set up Postgres with three lines of bash and a Unix socket. It is more work; I wouldn't say it's appreciably more work, and if you need more rigorous data modeling (I frequently do) then it's worth considering if this is the mode in which you'd like to run your data stores (I wouldn't, but latency isn't the most critical thing for stuff I do).
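For what it's worth, the minimal version of that claim (a sketch, assuming a Debian-ish system and the default peer auth over the Unix socket; `myapp` is a placeholder OS user) is roughly:

```shell
$ sudo apt install postgresql                  # server + psql client
$ sudo -u postgres createuser myapp            # role matching the OS user
$ sudo -u postgres createdb -O myapp myapp_db  # database owned by that role
```

After that, the `myapp` OS user can connect over the socket with `psql myapp_db`, no password or TCP involved, because Debian's default pg_hba.conf uses peer auth for local connections.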


I think it is more than that.

Steps will be:

- sudo apt install postgresql (more steps are needed if you are not satisfied with the OS default version)

- sudo su postgres

- psql -> create db, user, grant permissions

- modify pg_hba.conf file to change connection permissions

- modify postgresql.conf to change settings to your liking because defaults are very out of touch with modern hardware
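Concretely, the middle steps might look like this (a sketch; names, password, and the Debian-style config path are placeholders):

```shell
$ sudo -u postgres psql
postgres=# CREATE USER myapp WITH PASSWORD 'changeme';
postgres=# CREATE DATABASE myapp_db OWNER myapp;
postgres=# GRANT ALL PRIVILEGES ON DATABASE myapp_db TO myapp;
postgres=# \q
$ sudoedit /etc/postgresql/14/main/pg_hba.conf   # adjust auth rules
$ sudo systemctl reload postgresql               # pick up pg_hba changes
```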

Another option would be to build your own Docker image, but I am not good at this.


Whereas with SQLite it's:

- sudo apt install sqlite3 # you're going to want the client

- sudo chown myapp db.sqlite3

- sudo chown myapp . # make sure you have directory write permissions for WAL/journal

- sudo -u myapp sqlite3 db.sqlite3 'PRAGMA journal_mode=WAL;'

Nothing is ever single-command easy in all cases.


I am not familiar with SQLite, nor with how engineers embed it into apps, but I believe for many apps that ship SQLite you don't need all these steps; it is included as a library, not as a standalone server.

I am familiar with Java and H2: you just say somewhere in config "store the db in this directory", and you are all set.
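With H2 that really is about one line of config. For example, a file-based JDBC URL like the one below (the property names are hypothetical app config, but the URL form is H2's) makes H2 create and use `./data/myapp.mv.db` on first access, with no server process at all:

```
# hypothetical application config; H2 creates the database file on first use
db.url=jdbc:h2:./data/myapp
db.user=sa
db.password=
```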

I think there is no such route currently for PGSQL.


Sorry, but it's more complex than this. As an example, NPM's `sqlite3` package does not contain the actual library. Neither does Ruby's. You need the platform library for quite a few runtimes and now your single-command future has...multiple commands.
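On a Debian-ish box, that "multiple commands" future might look something like this (a sketch; the npm package builds or links its native binding at install time):

```shell
$ sudo apt install build-essential libsqlite3-dev   # toolchain + platform library
$ npm install sqlite3                               # the binding itself
```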

Which isn't bad. This is a pretty low bar to clear.

(All that said, I am of the opinion that separating data storage from application logic is better for operations, for security, and for disaster recovery unless you desperately need to prioritize avoiding network latency above everything else, so this is all pretty moot to me.)


> As an example, NPM's `sqlite3` package does not contain the actual library. Neither does Ruby's.

That seems pretty... odd on their part. I know that Ruby gems in particular can and do include C sources, so there's no reason it couldn't ship sqlite3.c (or even the various .c files that get combined into sqlite3.c); if you're able to install Nokogiri (i.e. you have a usable C toolchain), then you should be able to compile sqlite3.c no problem.

Meanwhile, Python ships with an "sqlite3" module in its standard library (last I checked), so if you have Python installed at all then you almost certainly already have SQLite's library installed.
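That's easy to verify from a shell, assuming `python3` is on the PATH:

```shell
# prints the version of the SQLite library the interpreter was built against;
# if this works, no separate SQLite install step was ever needed
python3 -c "import sqlite3; print(sqlite3.sqlite_version)"
```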


Nokogiri itself depends on an OS-supplied libxml2, doesn't it? And I think--don't quote me, Python isn't my thing and I only looked briefly--that Python on Ubuntu requires the `python-sqlite` APT package, which depends on both `python` and `libsqlite0`. I think CentOS is the same way? As a sysadmin, I'd expect it to be this way, so I'm not surprised by seeing this.

I think the thinking is that packaging the entire library rather than just your necessary bindings is overkill/duplication/etc. - it's the static-versus-dynamic argument in a sense, even though you may be dynamically linking while bundling your own library. Duplication versus OS-controlled security patching? Admittedly, when most deployables end up being Docker containers (for good or for ill) that's less important, but most of these tools are from a distinctly earlier era of deployment tooling.


> Nokogiri itself depends on an OS-supplied libxml2, doesn't it?

It does, but it still compiles bindings for it. My point is that if you're able to compile C code at all (including Nokogiri's bindings to libxml2), then strictly requiring some external libsqlite3.so is unnecessary - because it's pretty dang simple to just compile sqlite3.c alongside whatever other C code you're compiling.

> And I think--don't quote me, Python isn't my thing and I only looked briefly--that Python on Ubuntu requires the `python-sqlite` APT package, which depends on both `python` and `libsqlite0`

Right, and my point there is that if you've installed Python, then in doing so you've already installed SQLite (because your package manager did so automatically, because it's a dependency), so you don't need to run an additional command to install SQLite.


> but it's more complex than this. As an example, NPM's `sqlite3` package does not contain the actual library. Neither does Ruby's. You need the platform library for quite a few runtimes and now your single-command future has...multiple commands.

I think the usual way to do it is to have a shared library file like libsqlite3.so in some directory or installed through the OS package manager, and a thin API binding for the specific runtime in the form of a runtime package (e.g. npm).

> you desperately need to prioritize avoiding network latency above everything else, so this is all pretty moot to me

The goal here is easy distribution of an embedded DB. Many apps do this with SQLite, for example the Dropbox client. Many other examples: https://www.sqlite.org/famous.html


I see what you're saying now. But unless I'm very much missing the plot, you and everyone else are talking about a web application. The post to which you originally replied even constrained it as such:

> you could conceivably run postgres on your app server

So what you've been saying has been taken in the context of packaging SQLite or Postgres next to your web application, which has pretty limited use cases, and reads really weirdly in that light.


The discussion was about "app server" and ops maintenance. We've just got used to the idea that if you want to bootstrap/install a web/API stack, you need to change tons of knobs and configs and install many libraries and software pieces.

But there is another approach: one self-contained binary/artifact, which can include an embedded DB too and be launched with one command.


As my comment says, you are going to want the stand-alone client, though, for a variety of reasons. The psql client usually comes along with the postgresql server package, but apps embedding SQLite3 definitely don't come with the stand-alone command-line sqlite3 client.


> you are going to want the stand-alone client though

I don't want a stand-alone client for an app which a customer downloaded from my site.


The official docker image seems quite good and handles a lot of the bootstrapping needs.


Could you be more specific about what exactly it handles from the steps above?

Also, Docker adds complexity of its own.


I can’t remember the exact command off the top of my head, but it’s something like:

docker run -it -e POSTGRES_PASSWORD -p 5432:5432 postgres:12

And bam, you have a server
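The full incantation is close to that; something like the following (container name, password, and tag are placeholders):

```shell
$ docker run --name pg -e POSTGRES_PASSWORD=secret -p 5432:5432 -d postgres:12
$ docker exec -it pg psql -U postgres -c 'SELECT version();'
```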


Another cool thing is that you can initialize the Postgres db when the container first starts with your own SQL (or sh) scripts to create your schemas, tables, views, etc., just by copying the scripts into /docker-entrypoint-initdb.d/.

Like:

FROM postgres:14.3
COPY pg/pg-config.sql /docker-entrypoint-initdb.d/000-pg-config.sql
COPY pg/schemas.sql /docker-entrypoint-initdb.d/010-schemas.sql
COPY pg/meddra-tables.sql /docker-entrypoint-initdb.d/020-meddra-tables.sql
# ...
RUN chmod a+r /docker-entrypoint-initdb.d/*
ENV POSTGRES_USER=myapp
ENV POSTGRES_PASSWORD=xxx
ENV POSTGRES_DB=mydb
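Building and running such an image would then be (a sketch; note that the official image only runs the init scripts when the data directory is empty, i.e. on first start):

```shell
$ docker build -t myapp-db .
$ docker run -d -p 5432:5432 myapp-db
```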


I tend to do things like (after launching container):

docker exec postgres psql -h localhost -p 5432 -U ${DBADMINUSER} -c "some SQL statement"


There is a config change in the steps above (pgsql defaults suck), and with Docker in the mix, changing it becomes harder.

Also, I usually change the pg_hba.conf file too, to disallow outside connections.
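A restrictive pg_hba.conf along those lines might look like this (a sketch; rules match top-down, and with no rule admitting non-local addresses, remote connections are refused):

```
# TYPE  DATABASE  USER  ADDRESS        METHOD
local   all       all                  peer
host    all       all   127.0.0.1/32   scram-sha-256
host    all       all   ::1/128        scram-sha-256
```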


Keep in mind the suggestion was a managed product in production but if you really need to customize your local image, you can mount in arbitrary initialization scripts as well.

If you need all this customization then did SQLite really fit the bill in the first place?


> if you really need to customize your local image

I need to change the config for the prod pgsql instance, because the default config is not necessarily suitable, performance-wise, for serious prod machines.
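For what it's worth, those changes don't have to mean hand-editing postgresql.conf: `ALTER SYSTEM` persists settings to an override file (postgresql.auto.conf). The values below are illustrative placeholders, not recommendations:

```shell
$ sudo -u postgres psql -c "ALTER SYSTEM SET shared_buffers = '4GB'"
$ sudo -u postgres psql -c "ALTER SYSTEM SET effective_cache_size = '12GB'"
$ sudo systemctl restart postgresql   # shared_buffers requires a restart
```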

> If you need all this customization then did SQLite really fit the bill in the first place?

I am not familiar with SQLite, but somewhat familiar with Java's H2 DB, which has a similar idea. You can embed it and all its config into a self-contained Java jar file together with your app, and no changes to the host OS are needed.


I only use this for local development personally, we use RDS in Prod



