The pages themselves are fine, of course, but we need this data in a more human-readable form. The page model is convenient for storage and for transaction algorithms, but not for access: reading pages bit by bit is awkward, so we need a friendlier way to get at the data they contain.
There are pages on disk that belong to tables A and B. The data on disk knows nothing about which table it belongs to; that knowledge lives in the optimizer, i.e. the part of the database engine that plans how to execute the query we wrote in the query language. In traditional SQL terms, the result of this planning is usually called a query plan. Using a sequential scan, the engine will read pages from table A and table B, sometimes in parallel, sometimes one after the other, depending on the implementation.
Next, it will apply a JOIN to them, for example, do something else with the result, and then return the answer to the client.
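To make this concrete, here is a minimal sketch of asking the engine for its query plan. The table names `a_items` and `b_items` are made up for illustration; SQLite's syntax is `EXPLAIN QUERY PLAN` (PostgreSQL uses `EXPLAIN` / `EXPLAIN ANALYZE`):

```python
import sqlite3

# Two hypothetical tables, joined in SQL so the engine itself
# decides how to access the pages and combine the rows.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a_items (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE b_items (id INTEGER PRIMARY KEY, a_id INTEGER, value TEXT);
""")

# Ask the engine how it intends to execute the query.
plan = conn.execute("""
    EXPLAIN QUERY PLAN
    SELECT a.name, b.value
    FROM a_items a JOIN b_items b ON b.a_id = a.id
""").fetchall()

for row in plan:
    print(row)  # each row describes one step, e.g. a scan of a table
```

The exact plan text varies by SQLite version, but you will see scan/search steps for both tables, which is exactly the "take pages from table A and table B" part described above.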
Why is this convenient? Imagine the alternative: you read all of this into your Python application. The tables can be huge, and the JOIN condition may exclude 90% of the data, yet you pull everything into memory, loop over it, sort it, and return it. Inside the database, at each of these steps the planner can decide how to do it more cheaply. For example, it can choose the JOIN method: it can be pure nested loops, or it can build a hash table from one table and probe it with the other, and so on. Depending on the method, a full sequential scan may not even be needed, i.e. you may not have to read the entire table at all, let alone ship all of its data to the application.
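The two JOIN methods mentioned above can be sketched in plain Python. This is a toy illustration of the strategies a planner chooses between, not a real planner; the table contents are invented for the example:

```python
# Toy data: (id, name) rows for table A, (id, a_id, value) rows for table B.
table_a = [(1, "alice"), (2, "bob"), (3, "carol")]
table_b = [(10, 1, "x"), (11, 3, "y"), (12, 3, "z")]

# Nested-loop join: for every row of A, scan all of B.
# Cost is O(len(A) * len(B)) but needs no extra memory.
nested = [(a_name, b_val)
          for a_id, a_name in table_a
          for _, b_a_id, b_val in table_b
          if b_a_id == a_id]

# Hash join: build a hash table over one side ("cache one table"),
# then probe it with the other. Cost is O(len(A) + len(B)),
# at the price of memory for the hash table.
by_id = {}
for a_id, a_name in table_a:
    by_id.setdefault(a_id, []).append(a_name)
hashed = [(a_name, b_val)
          for _, b_a_id, b_val in table_b
          for a_name in by_id.get(b_a_id, [])]

# Both strategies produce the same rows; only the cost differs.
print(sorted(nested))
print(sorted(hashed))
```

The planner's job is precisely this choice: given table sizes and available indexes, pick the cheaper of such equivalent strategies.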
Inside the database, all of this has already been invented for you and implemented efficiently.