
Method convertToDatabaseValue of CarbonTypeConverter.php is broken on Oracle Database #12

Open
Jeydolen opened this issue Oct 8, 2024 · 7 comments

Comments

@Jeydolen

Jeydolen commented Oct 8, 2024

Carbon version: 3.8.0
Carbon doctrine types version: 3.2.0
PHP Version: 8.3.12

Hi everyone, I'm having some trouble getting Carbon to work with Doctrine on an Oracle database.

When using dates with Oracle, you have to alter the session to use the right format.

Doctrine already does this for us via the InitializeSession.php class. However, when we try to insert dates using Carbon instead of DateTime, we no longer use Doctrine types such as DateTimeType.php, but rather those of carbon-doctrine-types. The latter uses CarbonTypeConverter::convertToDatabaseValue internally to insert the correct values in the database.

The problem is that CarbonTypeConverter does not take the precision of the database date column into account and assumes that all databases can store microsecond precision, which is not the case with Oracle when using a DATE column, or simply when using the default Doctrine settings for date management.

Is there a reason not to use $platform->getDateTimeFormatString() inside convertToDatabaseValue?
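For illustration, here is a minimal sketch of the mismatch (not the library's actual code; the microsecond format is simply what I observe being produced):

```php
<?php
// Minimal sketch of the mismatch described above (illustrative only).
use Carbon\Carbon;

$value = Carbon::parse('2024-10-08 12:34:56.789012');

// What carbon-doctrine-types effectively produces today (microseconds included):
echo $value->format('Y-m-d H:i:s.u'), PHP_EOL; // 2024-10-08 12:34:56.789012

// What relying on the platform format string would produce for Oracle, whose
// getDateTimeFormatString() has no fractional part:
echo $value->format('Y-m-d H:i:s'), PHP_EOL;   // 2024-10-08 12:34:56
```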

@kylekatarnls
Contributor

Hello,

Did you try \Carbon\Doctrine\DateTimeDefaultPrecision::set(0)?

You can find a more complete guide on how to integrate with Doctrine in our docs:
https://carbon.nesbot.com/symfony/
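If it helps, a minimal sketch of where the call could go (the exact placement is an assumption; it just needs to run before Doctrine builds the column definitions):

```php
<?php
// Hedged sketch: set Carbon's default Doctrine precision to 0 early in the
// application bootstrap (exact placement is an assumption), before Doctrine
// metadata / column definitions are built.
use Carbon\Doctrine\DateTimeDefaultPrecision;

DateTimeDefaultPrecision::set(0);
```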

@Jeydolen
Author

Jeydolen commented Oct 10, 2024

Nope, it doesn't do the trick unfortunately. I also tried #[ORM\Column(type: Types::DATETIME_IMMUTABLE, precision: 0)] but it doesn't work either.

Which is kind of expected, if I'm not mistaken, as there is no precision check inside CarbonTypeConverter::convertToDatabaseValue.

I think there should be a check for the maximum precision (maybe through CarbonTypeConverter::getMaximumPrecision), but it won't be enough in Oracle's case because of the date format given by Doctrine.
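Roughly, what I have in mind (an illustrative sketch only, not the library's actual code; the precision argument stands in for whatever CarbonTypeConverter::getMaximumPrecision would return):

```php
<?php
// Illustrative sketch of a precision-aware conversion (not the library's code).
function convertToDatabaseValueSketch(\DateTimeInterface $value, int $precision): string
{
    // With precision 0 (e.g. an Oracle DATE column, or the default Doctrine
    // settings), drop the fractional part entirely.
    $format = $precision > 0 ? 'Y-m-d H:i:s.u' : 'Y-m-d H:i:s';

    return $value->format($format);
}
```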

@Jeydolen
Author

I might add that one thing is certain: changing the precision inside Carbon (through \Carbon\Doctrine\DateTimeDefaultPrecision::set(0)) or inside the table column definition can't change anything, as CarbonTypeConverter::convertToDatabaseValue returns the date WITH microseconds in any case.

@kylekatarnls
Contributor

Correct. In theory Doctrine has $platform->getDateTimeFormatString() for that, but only SQLServerPlatform actually includes the microseconds in it. So relying on it would prevent getting microseconds on all the other platforms that can actually support them.
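For illustration (class names assume DBAL 3.x; the returned values reflect my reading of DBAL and may differ between versions):

```php
<?php
// Hedged illustration: only SQLServerPlatform includes the fractional part in
// its date-time format string (values may differ between DBAL versions).
use Doctrine\DBAL\Platforms\MySQLPlatform;
use Doctrine\DBAL\Platforms\OraclePlatform;
use Doctrine\DBAL\Platforms\SQLServerPlatform;

echo (new MySQLPlatform())->getDateTimeFormatString(), PHP_EOL;     // Y-m-d H:i:s
echo (new OraclePlatform())->getDateTimeFormatString(), PHP_EOL;    // Y-m-d H:i:s
echo (new SQLServerPlatform())->getDateTimeFormatString(), PHP_EOL; // Y-m-d H:i:s.u
```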

@Jeydolen
Author

Okay, but is this behavior tested on all RDBMS then? I see it's not the case for Oracle, but it would help to know whether a special case needs to be made for Oracle (for example, by overriding Doctrine's InitializeSession) or whether storing microseconds is only compatible with SQL Server.
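For the InitializeSession idea, this is roughly what I would try (a hedged sketch; the exact NLS formats are an assumption on my part, not Doctrine's shipped statement):

```php
<?php
// Hedged sketch of overriding the Oracle session initialization so that
// TIMESTAMP formats accept fractional seconds (FF6). The exact statement is
// an assumption, not Doctrine's shipped InitializeSession code.
use Doctrine\DBAL\Connection;

function initializeOracleSessionWithFractionalSeconds(Connection $connection): void
{
    $connection->executeStatement(
        "ALTER SESSION SET"
        . " NLS_DATE_FORMAT = 'YYYY-MM-DD HH24:MI:SS'"
        . " NLS_TIMESTAMP_FORMAT = 'YYYY-MM-DD HH24:MI:SS.FF6'"
        . " NLS_TIMESTAMP_TZ_FORMAT = 'YYYY-MM-DD HH24:MI:SS.FF6 TZH:TZM'"
    );
}
```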

@Jeydolen
Author

Jeydolen commented Oct 17, 2024

Hey, little update.

I've searched the Doctrine and Carbon source code to see which formats are used to store dates and, by default, Doctrine stores microseconds only for SQL Server.

MySQL and SQLite may be able to store them with Doctrine settings (I haven't tested), but microseconds are not declared explicitly in the column definitions and are not used afterwards when saving.

In the case of Oracle or DB2, there is no way of “taking advantage” of microsecond storage through Doctrine declarations. I don't understand why converting dates with microseconds is mandatory, especially as in some cases, such as Oracle, it's simply not possible (when using the default Doctrine settings).

As the documentation says, these are “good practices” and I admit that in an ideal world I'd like to be able to follow them, but I work a lot with data and structures that haven't changed much in 20 years (and therefore don't necessarily use the latest column types such as timestamp).

Also, it seems to be intentional for Doctrine to never use microseconds for optimization reasons. See vendor issues

@kylekatarnls
Contributor

Hello,

MySQL and SQLite definitely support it.

We have an issue open on Doctrine to make it better supported; it's been open for quite some time.

It's actually from this thread that I took the information about how many digits each platform supports: doctrine/dbal#2873 (comment)

This contributor even sourced his claims with doc links: https://docs.oracle.com/cd/B19306_01/server.102/b14200/sql_elements003.htm#BABGIGCJ

The doc has changed and I cannot find the maximum number of supported fractional digits anymore, but searching for "fractional second" you can see an example with milliseconds, so it must support at least 3.

Why are we talking about "taking advantage" and "good practices" when recommending keeping such precision?

Because at high scale, you can have a lot of events coming in during the same second. Also, at some point an auto-incremented ID doesn't scale either: if everybody is inserting into the same table and has to queue on a single shared ID generator, it can become a bottleneck and perform badly. So sometimes it's required to also load-balance the writes, and then you can no longer use the ID to know in which order rows were inserted and sort them, whether you need to display newest first or something more critical such as a business decision with finer granularity than the second.

What if I'm sure I don't need microsecond precision?

It's actually fine to drop it. I'm no specialist of Oracle, but I would try to make it possible to work with Oracle and Carbon and be able to choose any precision from 0 to the maximum. We definitely aim to support as many platforms as we can with the least possible friction.

Why default to max precision rather than second precision?

If you don't know, it's better to have precision you don't use than to find out later that you actually needed it and it's lost forever because it was never recorded. Optimization? Yes, if you have a huge DB you may think of sacrificing precision for performance. But in both cases, I think sacrificing precision should be a conscious choice rather than the default.

Also, to be fair about the optimization point: 1 byte per row is not a game-changer; if that's the best optimization you can do for your app, you're very lucky.

TL;DR

If we could reproduce the issue in some standalone container, I would try my best to see how it should actually be configured to work, and fix our library if there is an incompatibility in it.
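Something along these lines would already be a useful starting point (a hedged sketch: connection parameters, table and column names are placeholders):

```php
<?php
// Hedged reproduction sketch: connection parameters, table and column names
// are placeholders; the goal is just to see whether Oracle accepts the
// microsecond-formatted value that Carbon's Doctrine type produces.
use Carbon\Carbon;
use Doctrine\DBAL\DriverManager;

$connection = DriverManager::getConnection([
    'driver'   => 'oci8',
    'host'     => 'localhost',
    'port'     => 1521,
    'user'     => 'app',
    'password' => 'secret',
    'dbname'   => 'FREEPDB1',
]);

$connection->insert('events', [
    'occurred_at' => Carbon::now()->format('Y-m-d H:i:s.u'), // value with microseconds
]);
```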
