
First commit and adding Postgresql-Test #28

Closed
wants to merge 2 commits

Conversation

FrankUrbach
Contributor

Hello Adrian

Your PostgreSQL SearchLight adapter deserves at least a few tests. For this I have prepared the file test_postgreSQL.jl. In it I have added some teardown statements to make the test cases more independent of each other. Feel free to use this file for your work. Because of the differences between MySQL and PostgreSQL, it may be necessary to build some API in SearchLight for the uniform treatment of different kinds of databases. If this is what you have in mind, please let me know. On the journey through my application I will add some features to the API for different databases.

Best,
Frank

@essenciary
Member

@FrankUrbach Thanks for these, this is great! I'm thinking maybe they'd make more sense in the SearchLightPostgreSQL adapter package? This way each adapter will contain its own tests, instead of everything being included in core SearchLight (which can have some generic tests of its own).

@FrankUrbach
Contributor Author

Hello Adrian
That makes sense. I will prepare this when I come back from my vacation. What do you think about integrating the tests for MySQL into the SearchLightMySQL adapter? In my opinion, defining a standard test set which every adapter should fulfill would be very useful. The cleanup of items created in the database during the tests should also be handled; I added first teardowns in the tests for SearchLightPostgreSQL.
Best
Frank

@essenciary
Member

@FrankUrbach that would be fantastic! I also believe that would be the best approach (tests with the adapter). SearchLight itself can include some generic unit tests which are not backend related.

@FrankUrbach
Contributor Author

FrankUrbach commented Nov 12, 2020

Hi Adrian
I've added some functionality for generating a table-creation migration from a DataType. In some circumstances it would be tedious to write all migrations by hand; that could be error-prone and less convenient than it should be.
I'm very interested in your opinion.
Best
Frank

@FrankUrbach
Contributor Author

Hello Adrian
I've added some more things to move this in the right direction. I hope you will find the time to read the changes and can merge them into the official branch.

Keep healthy.

Best Frank

Project.toml Outdated
@@ -13,6 +13,9 @@ Millboard = "39ec1447-df44-5f4c-beaa-866f30b4d3b2"
OrderedCollections = "bac558e1-5e72-5ebc-8fee-abe8a469f55d"
Reexport = "189a3867-3050-52da-a836-e630ba90ab69"
SHA = "ea8e919c-243c-51af-8825-aaa63cd721ce"
SafeTestsets = "1bc83da4-3b8d-516f-aca4-4fe02f6d838f"
Member


The testing dependencies need to be added to a separate Project.toml file inside test/

Contributor Author


OK. Then I will prepare the tests from within the "tests" folder and document this in the right place.

@@ -36,6 +36,28 @@ function new_table_migration(module_name::String, resource::String) :: String
"""
end

function new_table_migration(module_name::String, namesAndTypes::String ,resource::String)::String
Member


I don't think we need this migration generator just to create a migration for the test. Within the test/ folder we can add a migration/ folder where we add the already created migration. Then at the beginning of the test we perform the so-called set-up phase, where we create the migrations table and run this migration. At the end of the test we perform the so-called tear-down, where we run the migrations down and delete the test table.
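The set-up/tear-down flow described here could be sketched as follows. This is a minimal, self-contained illustration: `setup_db` and `teardown_db` are placeholder names standing in for the real migration calls (creating the migrations table, running the prepared migration up and down), not SearchLight's actual API.

```julia
using Test

# Placeholder helpers -- stand-ins for the real migration API calls
# (create the migrations table and run the prepared migration up/down).
setup_db()    = @info "set-up: create migrations table, run migration up"
teardown_db() = @info "tear-down: run migration down, drop test table"

@testset "adapter tests" begin
    setup_db()
    try
        @test true  # the actual adapter tests go here
    finally
        teardown_db()  # leave a clean database even if a test fails
    end
end
```

The `try`/`finally` ensures the tear-down runs even when a test errors, which is what keeps the test cases independent of each other.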

Member


Also, please use Julia's recommended style: variable names use underscores, e.g. names_and_types. Also, commas should not have a space before them.

Contributor Author


This is not for test cases. This function is meant to be used during development to generate the whole configuration file from a given type. In the real world you don't want to write the same thing twice: once in a struct and once in a table migration. With this function and a few more it is possible to create the right definition once. If you don't want this in your package, I can remove it and implement it for my own use somewhere else.

Member


Oh, I understand now. I think it's a great idea - so basically to generate the migration based on the model's fields. Yeah, that'd be awesome if we can make it work. In this case shouldn't we just pass an optional model type to the new_table_migration function and use the fields and types defined in the model?
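The idea of deriving the migration's column list from the model's fields can be sketched with Julia's reflection functions. The `Book` struct, the `SQL_TYPES` mapping and the `columns` helper below are purely illustrative, not SearchLight's actual type mapping or API:

```julia
# Illustrative model type.
struct Book
    id::Int
    title::String
    price::Float64
end

# Hypothetical Julia-type-to-SQL-type mapping (each real adapter
# would provide its own).
const SQL_TYPES = Dict(Int => "INTEGER", String => "VARCHAR", Float64 => "FLOAT")

# Build a column definition string from a type's field names and types.
columns(T::Type) = join(
    ("$(name) $(get(SQL_TYPES, ftype, "TEXT"))"
     for (name, ftype) in zip(fieldnames(T), fieldtypes(T))),
    ", ")

columns(Book)  # => "id INTEGER, title VARCHAR, price FLOAT"
```

Passing the model type to the migration generator and calling something like `columns` inside it would give the "define once" workflow discussed here.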

@essenciary
Member

Thanks @FrankUrbach, that's pretty cool stuff! I hope I understood your intentions regarding the code well, and I've provided some feedback. 😊


src/Generator.jl Outdated
@@ -72,6 +72,18 @@ function new_table_migration(migration_name::Union{String,Symbol}; pluralize::Bo
nothing
end

function new_table_migration( modelType::Type{T}; pluralize::Bool = true):: Nothing where {T<:SearchLight.AbstractModel}
Member


As I was saying above, we don't need to create these migrations at test time - we can create much simpler unit tests for migration generation. We can just use an already created migration file.

Contributor Author


OK. I'm relatively new to Julia, so please forgive my un-Julian syntax. I come from Smalltalk, where this type of syntax is common. I will review the code and change this.

src/Migration.jl Outdated
@@ -52,6 +52,42 @@ function new_table(migration_name::String, resource::String) :: Nothing
nothing
end

function namesAndTypes(modelType::Type{T}) where {T<:SearchLight.AbstractModel}
Member


Same comments here:
a) there's a lot of (mostly duplicated) code just to generate the migration. Let's just create it and include it with the test.
b) same comment about function names and variables; camelCase is a big no-no in the Julia community.

Contributor Author


Will be changed.

src/Migration.jl Outdated
@@ -449,6 +485,8 @@ const drop_sequence = remove_sequence

function create_migrations_table end

function drop_migrations_table end
Member


Why do we need this?

Contributor Author


Maybe this function should go into a separate SearchLightTestUtil package. The idea behind it is that I don't want to leave residues behind from one test scenario to the next. Doing the same cleanup by hand every time was a little bit annoying.

@@ -139,6 +139,20 @@ end

# TODO: max(), min(), avg(), mean(), etc

function save(m::T; conflict_strategy = :error, skip_validation = false, skip_callbacks = Vector{Symbol}())::Bool where {T<:Array{A}} where{A<:AbstractModel}
Member


It would be nice to have a save method for arrays of models in general. But:
a) duplicated code is a big no-no - we simply need to call save(...) for each item.
b) I think we can just say function save(m::Vector{T}; ...) ... where {T<:AbstractModel}
c) if we add a save for arrays, we should also include the corresponding save!
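The suggested signature from b) can be sketched as below. `AbstractModel`, `Item` and the scalar `save` are self-contained stand-ins here (the real scalar `save` talks to the database); the point is only that the array method delegates instead of duplicating logic:

```julia
abstract type AbstractModel end

struct Item <: AbstractModel
    name::String
end

# Stand-in for the scalar save; the real one persists to the database.
save(m::AbstractModel; conflict_strategy = :error) = true

# Array version: no duplicated logic, just delegate per item.
function save(ms::Vector{T}; kwargs...)::Bool where {T<:AbstractModel}
    all(m -> save(m; kwargs...), ms)
end

save([Item("a"), Item("b")])  # => true
```

A corresponding `save!` would follow the same delegation pattern, calling the scalar `save!` per item.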

Contributor Author


That's true. I will change the code according to your recommendations.

@FrankUrbach
Contributor Author

FrankUrbach commented Nov 25, 2020

If you use my whole chain of SearchLight and SearchLightPostgreSQL, you will find exactly this in the tests. I don't think we should use an extra struct; that would add more complexity to the whole thing. Instead, read more about my thoughts below.
For now it is only possible to use fields which should be stored in the database. I am thinking about a system where it is possible to define a conversion between the columns stored in a database and the fields of a struct. I branched my own repo to try this and show what I mean. For now I am thinking of a simple array of tuples, where one side names the fields of the struct which should be stored and the other side the column name under which the data should be stored in the database. This makes it possible to decouple the model side from the database side, which is very useful in environments where we don't have the freedom to determine what goes on in the tables. In order not to make it too difficult for beginners, the function should return the fields of the struct by default. The rest is straightforward: implement this feature in reading, writing and updating the database. As I said: more soon in code. I will inform you when I have something ready for review.

@essenciary
Member

The idea is not to use an extra struct - we use the model itself, which we define in the application (we can define the model first, then run the migration based on it). We do have mappings already, for each backend, eg: https://github.com/GenieFramework/SearchLightPostgreSQL.jl/blob/cbbd7b5626965d29653edd6987afde9d4f62699e/src/SearchLightPostgreSQL.jl#L23

@FrankUrbach
Contributor Author

That's not the point I meant. The point here is, e.g., that you have a struct with a field "sizeA" and want to store it in the column "size_a", without putting that name into your model. So you have to tell the model somehow how to store it. This lives beside the type conversion we have for each adapter. As an example:

fields: a, b, c, d, e, where a, c and d are internal fields and b and e should be stored in the database.
columns (database): r, z
the mapping for the storage is:
b => r
e => z

A type conversion is not planned, because that would lead to undebuggable code.
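The array-of-pairs mapping proposed in the example above can be sketched like this. The `field_mapping` name and its signatures are hypothetical (they are not part of SearchLight); only the mapping values come from the example:

```julia
# Illustrative model from the example: a, c, d are internal,
# b and e are persisted under columns r and z.
struct MyModel
    a::Int; b::Int; c::Int; d::Int; e::Int
end

# Hypothetical API: a model may define an explicit field => column mapping.
field_mapping(::Type{MyModel}) = [:b => "r", :e => "z"]

# Default for any other type: persist every field under its own name,
# so beginners don't have to think about mappings at all.
field_mapping(T::Type) = [f => String(f) for f in fieldnames(T)]

field_mapping(MyModel)  # => [:b => "r", :e => "z"]
```

Dispatch picks the model-specific method when one exists and falls back to the generic one otherwise, which matches the "return the fields of the struct as default" behavior described earlier in the thread.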

@essenciary
Member

SearchLight already ignores fields that don't have a corresponding column by applying a default mapping convention (from field name to column name). Indeed, explicit mapping would be very useful in order to support column names that don't follow SearchLight's conventions (especially legacy code).
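A default field-name-to-column-name convention of the kind mentioned here (e.g. turning `sizeA` into `size_a`) can be sketched as a camelCase-to-snake_case conversion. This is illustrative only; SearchLight's actual convention may differ:

```julia
# Convert a camelCase field name to a snake_case column name:
# insert an underscore before each uppercase letter, then lowercase.
to_column(name::Symbol) =
    lowercase(replace(String(name), r"([a-z0-9])([A-Z])" => s"\1_\2"))

to_column(:sizeA)  # => "size_a"
```

An explicit mapping (as discussed above) would then only be needed for columns that deviate from this convention, e.g. in legacy schemas.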

@FrankUrbach
Contributor Author

"SearchLight already ignores fields that don't have a corresponding column by applying a default mapping convention (from field name to column name)." I thought this was the case, too, but if I use a struct with fields not included in the set of database columns, saving the object goes wrong. Perhaps I'm mistaken, but in my tests the behavior you describe is not there for an insert, i.e. save.

@FrankUrbach
Contributor Author

Hello Adrian
I've added a branch called "saveWithoutInterns" to my forks of SearchLight and SearchLightPostgreSQL. For this I added a function storeableFields to SearchLight and imported it in the PostgreSQL adapter. With this technique it is possible to import the function into a module and use it as an interface; if it is not implemented for a model, there is a fallback function in the adapter. It would be very useful if you had the time to review the code before I merge it into my master. Any hints and comments are very much appreciated.

Best Frank
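The `storeableFields` mechanism described in the comment above, an overridable function with a catch-all fallback, can be sketched as follows. The names, types and signatures here are reconstructed from the description, not verified against the "saveWithoutInterns" branch:

```julia
abstract type AbstractModel end

struct Plain <: AbstractModel
    x::Int
    y::Int
end

struct WithInterns <: AbstractModel
    x::Int
    cache::Int  # internal only, should not be persisted
end

# Fallback: by default every field of a model is storeable.
storeableFields(::Type{T}) where {T<:AbstractModel} = collect(fieldnames(T))

# Model-specific override excluding the internal field.
storeableFields(::Type{WithInterns}) = [:x]

storeableFields(Plain)        # => [:x, :y]
storeableFields(WithInterns)  # => [:x]
```

The adapter would then iterate only over `storeableFields(typeof(model))` when building INSERT/UPDATE statements, so internal fields never reach the database.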

@FrankUrbach
Contributor Author

FrankUrbach commented Dec 2, 2020

Hello Adrian
Now it's accomplished. In the branch mentioned above, insert, update and a simple find are implemented. Works like a charm. The patch for issue #31 is also there.
If you find some time, you can review the code. For now it is only possible to see this in conjunction with the SearchLightPostgreSQL adapter on the branch of the same name. Once we have discussed the solution, I will transfer the required parts to the SearchLightMySQL adapter.

Best Frank

@FrankUrbach
Contributor Author

Hello Adrian
As you mentioned, I have changed the tests so that they run from runtests.jl within the tests folder. It took me some time to figure out one error I had introduced myself, but now all tests are green in the "saveWithoutInterns" branch.
Best Frank

@FrankUrbach
Contributor Author

Hello Adrian
Now it is possible to store, read and update models whose internal fields are not the same as in the database. More complex queries against the database are not yet covered; more complex models to test with would be appreciated. The next step on my journey is to make it possible to store and read models with fields containing AbstractModels or arrays of them.
Best Frank

@FrankUrbach
Contributor Author

Hello Adrian
Next step accomplished. In the "saveWithoutInterns" branch it is now possible to store fields containing abstract models or arrays of abstract models. The next step is to read these items back; for the moment it is only possible to insert them. The update branch of the save method needs more work, which should happen in the next few days.
Best Frank

@essenciary
Member

Hi @FrankUrbach - thanks so much! Sorry, I have been super busy and haven't had the chance to work on the project lately. I'll take a look ASAP.

@FrankUrbach
Contributor Author

Hi Adrian
I think it is done now. In the branch described above you can see a fully working spec for saving and finding AbstractModels with fields containing abstract models, with the ability to save fields under different names in the database. I haven't figured out how to store in a different table than the original one; in theory it should be possible. All of this works only with the PostgreSQL adapter for now. The next step is to port it to the MySQL adapter. Not the nicest job, but it should be done.
Best Frank

@FrankUrbach
Contributor Author

FrankUrbach commented Dec 16, 2020

Hello Adrian
It went faster than I thought. If you go to the branch saveWithoutInterns in my fork of SearchLightMySQL.jl, you will find the same tests as in the PostgreSQL adapter. It brings the same functionality and all tests are green. I think the job is done :-)).
For now I will go back to working on my application using Genie, Stipple and StippleUI until you have the time to review the code. No pressure. I hope you will appreciate my code, but keep in mind: I'm a Julia beginner. Not all design decisions will be done the "Julian" way, and some may not be what you imagine on certain points. Sometimes we could use broadcasting more, but in my humble opinion the code is sometimes more readable without it.
In a nutshell, what is now possible:

  • saving models without fields meant only for internal purposes
  • saving models with different naming of fields and database columns
  • saving models with fields containing AbstractModels or arrays of abstract models
  • all of this can be combined
  • all models stored with the techniques described above can also be read back from the database

Because of the recursive algorithms used, saving and reading model structures of arbitrary depth should be possible.
There are some rough edges regarding the conventions for submodel ids, but it should be possible to sort these out in the near future.

Best Frank

@FrankUrbach
Contributor Author

Hi Adrian

This is my last attempt to ask whether you want to incorporate the ideas underlying this PR. You moved forward to version 2.0 without us discussing anything here. More than once I invited discussion of the ideas that came to my mind during the work on the Postgres adapter. I don't want to sound rude, but I think keeping this PR open is no longer useful. I take from this that you don't want help on this project, so I draw my own conclusions. Sad, but it is how it is.

Best Frank

@FrankUrbach
Contributor Author

This pull request is obsolete because development took other directions during the time it was open.
