Storing and processing data is a mission-critical task for any computer system.
Even if you have run a blog as plain text files on the Internet for 30 years, as some database creators do, those text files are actually a database, just a very simple one.
Everyone tries to invent a database at some point. One of the speakers at the conference put it this way: "20 years ago I wrote my own database, but I didn't know it was one!" The trend is widespread: everybody tries it.
A database is a very handy tool for working with data. Databases are old technology: they have been developing for half a century, and already in the 1970s there were databases built on the same data-optimization principles used today.
These databases are well and thoughtfully written, so we can pick a programming language and use a common, user-friendly interface for working with data. That lets us process data in a standardized way, without fearing that it will be handled differently somewhere else.
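As an illustration of such a standardized interface, here is a minimal sketch using Python's built-in sqlite3 module. The table and column names are made up for the example; the same declarative SQL would work, with minor dialect differences, against PostgreSQL or Oracle.

```python
import sqlite3

# In-memory database for the example; a file path would make it persistent.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)")
conn.executemany("INSERT INTO posts (title) VALUES (?)",
                 [("First post",), ("Second post",)])
conn.commit()

# The query describes *what* we want; the database decides *how* to get it.
rows = conn.execute("SELECT title FROM posts ORDER BY id").fetchall()
print(rows)  # [('First post',), ('Second post',)]
```

The point is that the SQL text stays the same no matter which version of the host language runs it.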
At the same time, it is worth remembering that programming languages keep changing: yesterday it was Python 2, today it is Python 3, tomorrow everyone rushes off to write Go, and the day after tomorrow there is something else. You may end up with a piece of code that emulates the data-manipulation work a database should be doing, and no idea what to do with it.
In most databases, the interface is very conservative. Take PostgreSQL or Oracle: even very old versions can be used from new programming languages, perhaps with a bit of fiddling, and that is a good thing.
But the task is not really that simple. Once you start digging into how to process data without corrupting it, quickly, efficiently and, most importantly, in a way whose results you can trust, it turns out to be hard.
If you try to write your own simple persistent storage, everything goes fine for the first 15 minutes. Then the locks and the rest kick in, and at some point you realize: "Wait, why am I doing all this?"
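To see why those first 15 minutes are deceptive, here is a hypothetical naive persistent key-value store: an append-only log that is replayed on startup. The class and file names are invented for the sketch. It works for a single process, and the comments mark exactly where the locks and crash-safety problems begin.

```python
import json
import os

class NaiveStore:
    """Append-only log store: easy to write, hard to make safe."""

    def __init__(self, path):
        self.path = path
        self.data = {}
        # Replay the log on startup to rebuild the in-memory state.
        if os.path.exists(path):
            with open(path) as f:
                for line in f:
                    record = json.loads(line)
                    self.data[record["key"]] = record["value"]

    def set(self, key, value):
        # No locking: two processes appending at once can interleave records.
        # No fsync: a crash mid-write can leave a torn, unparseable line.
        with open(self.path, "a") as f:
            f.write(json.dumps({"key": key, "value": value}) + "\n")
        self.data[key] = value

    def get(self, key):
        return self.data.get(key)

store = NaiveStore("demo.log")
store.set("answer", 42)
print(store.get("answer"))  # 42
```

Fixing those two comments properly means file locks, fsync discipline, and log compaction, which is roughly the point at which one asks why one is rewriting a database.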