Re: How to choose a database
Date: Thu, 27 Apr 2023 12:02:41 -0400
Message-ID: <14f7ac04-da88-dbc2-5dc1-698bcb2a5737_at_gmail.com>
On 4/26/23 04:03, Pap wrote:
> Also, CockroachDB is showing 100K transactions per second in the blog
> below.
What kind of transactions? TPC-C? Why do you think nobody publishes TPC-C results any more? Are those transactions in any way similar to what you need to do? Did they take any shortcuts that would impact later recoverability? Something like NOARCHIVELOG mode would speed things up significantly.

A simple insert into a table, followed by "COMMIT", is a transaction. 100K of such inserts constitutes 100K transactions. If the database files are on an NVMe drive and there is enough CPU power, it is entirely possible to do "100K transactions per second", as long as the table is created with the appropriate parameters and the transactions are very small and simple. However, my experience tells me that real workloads are never that simple.

Furthermore, my personal experience tells me that Oracle usually performs better than any PostgreSQL-compatible database because of the shared pool and reduced parsing, which the Postgres clones do not have. The Postgres advantage is the price, not performance. You will have to re-architect the application, which also costs money.

The real question with Postgres-like databases is whether they are good enough. Do they support all the features you need in your application? Is the performance adequate? The goal of such a rewrite is usually to save money, not to gain performance. And that can only be established by running tests that resemble your real workload.
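To make concrete what I mean by a trivial transaction, here is a rough sketch of a single-row insert-and-commit loop. The connection string and table name are invented, and a single-threaded loop like this will not get anywhere near 100K/s by itself; the headline numbers need many concurrent clients, fast storage, and often relaxed settings.

    # Rough sketch only: one INSERT + one COMMIT = one "transaction".
    # Works against PostgreSQL or any Postgres-wire-compatible database
    # such as CockroachDB; connection string and table name are invented.
    import time
    import psycopg2

    conn = psycopg2.connect("dbname=test user=app host=localhost")
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS tps_probe (i INT, payload TEXT)")
    conn.commit()

    n = 100_000
    start = time.time()
    for i in range(n):
        cur.execute("INSERT INTO tps_probe (i, payload) VALUES (%s, %s)",
                    (i, f"row-{i}"))
        conn.commit()  # commit after every insert, so each insert is its own transaction
    elapsed = time.time() - start
    print(f"{n / elapsed:.0f} trivial transactions per second")

Compare the "trivial TPS" you get from something like this with the blog's numbers and you will see how much of the result comes from the shape of the test rather than the database.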
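The parsing point is easy to illustrate with bind variables. Below is a sketch on the Oracle side (python-oracledb, with invented connection details and table name): with literal SQL every statement text is unique and has to be hard-parsed, while with bind variables the statement is parsed once and later executions reuse the cursor from the shared pool.

    # Illustrative sketch, not anyone's production code.
    import oracledb

    conn = oracledb.connect(user="app", password="secret", dsn="dbhost/orclpdb1")
    cur = conn.cursor()
    cur.execute("CREATE TABLE tps_probe (i NUMBER, payload VARCHAR2(100))")  # assumes it does not exist yet

    # Anti-pattern: literals make every statement text unique,
    # so each execution needs its own hard parse.
    for i in range(1000):
        cur.execute(f"INSERT INTO tps_probe (i, payload) VALUES ({i}, 'row-{i}')")

    # Bind variables: parse once, execute many; the shared pool caches the cursor.
    cur.executemany(
        "INSERT INTO tps_probe (i, payload) VALUES (:1, :2)",
        [(i, f"row-{i}") for i in range(1000)],
    )
    conn.commit()

Postgres has per-backend prepared statements, but there is no instance-wide shared pool, which is exactly the difference I am talking about.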
--
Mladen Gogala
Database Consultant
Tel: (347) 321-1217
https://dbwhisperer.wordpress.com
--
http://www.freelists.org/webpage/oracle-l