
Product types and queries and user/tenant activation #47

Open · wants to merge 12 commits into master

Conversation

@wz1000 (Collaborator) commented Oct 31, 2016

Goals:

  • Adding activation for user/tenant creation
  • Implement Product API

Done:

  • Type class for DB query monad for finer grained separation of effects
  • Interface for product queries

How queries on products will work

Servant provides the QueryParams combinator to accept repeated query parameters in the URL:

type ProductAPI = "products" :> QueryParams "filter" ProductFilter :> Get '[JSON] [Product]

A handler for this API can be implemented like so:

productHandler :: [ProductFilter] -> App [Product]
productHandler [] = ...
productHandler xs = let x = mconcat xs in ...

This is possible because ProductFilter, like ProductView and ProductComparator, composes and thus forms a monoid. Now the user can query like this:

/products?filter=title:sometitle&filter=type:physical

and mconcat will compose all the supplied filters into a single filter from which we can build the SQL query.

The mechanism is the same for ProductView and ProductComparator. See ProductQuery.hs for more details on how this is handled.
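
To make this concrete, here is a minimal sketch of how such a filter type could look, assuming DBProduct is the PR's persistent entity; the actual definitions live in ProductQuery.hs and may differ:

import Database.Persist (Entity, Filter, selectList)
import Database.Persist.Sql (SqlPersistT)

newtype ProductFilter = ProductFilter [Filter DBProduct]

-- Lists form a monoid under (++), and persistent ANDs together all the
-- filters in a list, so combining filters is just concatenation.
instance Semigroup ProductFilter where
  ProductFilter xs <> ProductFilter ys = ProductFilter (xs ++ ys)

instance Monoid ProductFilter where
  mempty = ProductFilter []

-- The combined filter can be handed straight to selectList:
runProductFilter :: ProductFilter -> SqlPersistT IO [Entity DBProduct]
runProductFilter (ProductFilter fs) = selectList fs []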

@saurabhnanda (Contributor)

@wz1000 has the ProductQuery infra been hooked up to any handler? If not, can you please hook it up, so the flow is easier to understand?

@saurabhnanda (Contributor)

Also, if I understand the overall gist of ProductQuery correctly, it is supposed to parse multiple query params, bearing the same name, in the following format:

paramName=keyName:value

into a list of type [Filter DBProduct], where Filter is Persistent's internal data structure to hold SQL where clauses? (In the current case, the resultant data structure is more complex, but this is the basic idea, right?)

I'm not completely able to understand how you're using the Monoid property of ProductFilter to build a list of filters as you parse subsequent query params. Neither of the parseQueryParam functions you have defined is in any sort of Monad/Monoid. To rephrase, who exactly is calling mappend on the ProductFilter?

@saurabhnanda (Contributor)

While I will take a little more time to understand the mechanics of this idea, as usual, you seem to have solved this problem very elegantly. Is there any way to abstract this even further so any such API endpoint is much easier to write (basically, reduce the boilerplate)?

@saurabhnanda (Contributor)

And I assume this is possible for any DB library. In the case of Opaleye, one would parse the URL params to an Opaleye-specific type? Btw, I'm unable to figure out what type that would be: https://hackage.haskell.org/package/opaleye-0.5.1.1/docs/Opaleye-Operators.html

@wz1000 (Collaborator, Author) commented Nov 1, 2016

@saurabhnanda

I'm not completely able to understand how you're using the Monoid property of ProductFilter to build a list of filters as you parse subsequent query params. Neither of the parseQueryParam functions you have defined is in any sort of Monad/Monoid. To rephrase, who exactly is calling mappend on the ProductFilter?

parseQueryParam only defines how to parse a single ProductFilter/ProductView/ProductComparator.

When we use the QueryParams combinator, Servant automatically gives us a list of ProductFilter/ProductView/ProductComparator. We can then call mconcat (which uses mappend) on this list.
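
For illustration, a hedged sketch of the parsing side, reusing the ProductFilter sketch above: parseQueryParam is the method of Servant's FromHttpApiData class, while the column names below are assumptions, not the PR's actual code:

import qualified Data.Text as T
import Web.HttpApiData (FromHttpApiData (..))
import Database.Persist ((==.))

instance FromHttpApiData ProductFilter where
  -- Parse one "key:value" pair; Servant collects the results of the
  -- repeated ?filter=... parameters into the [ProductFilter] the handler sees.
  parseQueryParam t = case T.splitOn ":" t of
    ["title", v] -> Right (ProductFilter [DBProductTitle ==. v])
    ["type",  v] -> Right (ProductFilter [DBProductType ==. v])
    _            -> Left ("unrecognised filter: " <> t)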

@wz1000 (Collaborator, Author) commented Nov 1, 2016

Also, if I understand the overall gist of ProductQuery correctly, it is supposed to parse multiple query params, bearing the same name, in the following format:

paramName=keyName:value

This is the format for ProductFilter. ProductView works a little differently.

The API would be called like this:

 /products?fields=name&fields=description&fields=currency...

Each fields=somefield generates a ProductView. A ProductView is essentially a function from a product to a JSON object containing a subset of the product's fields. I've defined a new JSON type that is a newtype wrapper around aeson's JSON type, and implemented a Monoid instance for it.

What the monoid essentially does is take two JSON objects and combine them into a JSON object with the fields of both (if any fields overlap, the first object's fields are kept).

The Monoid instance for functions is defined whenever the result type is a monoid. Thus, when a function that takes a product and returns a JSON object containing its name is mappended to another that takes a product and returns a JSON object containing its description, the result is a function that takes a product and returns a JSON object containing both the name and the description.
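
As a reading aid, here is a hedged sketch of the two pieces just described; AppJSON and ProductView are the names used in this thread, but the definitions are illustrative (aeson of that era represents Object as a HashMap):

import qualified Data.Aeson as A
import qualified Data.HashMap.Strict as HM

newtype AppJSON = AppJSON A.Value

-- HM.union is left-biased, matching "the first object's fields are kept".
instance Semigroup AppJSON where
  AppJSON (A.Object a) <> AppJSON (A.Object b) = AppJSON (A.Object (HM.union a b))
  a <> _ = a

instance Monoid AppJSON where
  mempty = AppJSON (A.Object HM.empty)

-- Functions into a monoid form a monoid pointwise, so views compose:
newtype ProductView = ProductView (Product -> AppJSON)

instance Semigroup ProductView where
  ProductView f <> ProductView g = ProductView (\p -> f p <> g p)

instance Monoid ProductView where
  mempty = ProductView (const mempty)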

@saurabhnanda (Contributor)

Each fields=somefield generates a ProductView. A ProductView is essentially a function from a product to a JSON object containing a subset of the product's fields. I've defined a new JSON type that is a newtype wrapper around aeson's JSON type, and implemented a Monoid instance for it. What the monoid essentially does is take two JSON objects and combine them into a JSON object with the fields of both (if any fields overlap, the first object's fields are kept).

Wow! I had missed that completely.

We definitely need both of these wired-up to Servant handlers to complete the story!

@saurabhnanda (Contributor)

@wz1000 please confirm if you're wiring this up to Servant handlers.

@wz1000 (Collaborator, Author) commented Nov 1, 2016

@saurabhnanda Done

@wz1000 (Collaborator, Author) commented Nov 1, 2016

@saurabhnanda As persistent does not support joins, the definition for dbGetProductList I've written is pretty inefficient. I'll fix this by using esqueleto in the future.

@saurabhnanda (Contributor)

A few basic questions:

@saurabhnanda (Contributor)

At a conceptual level, the pattern/architecture that you seem to be going towards is the following: transform the HTTP request (incoming JSON, query parameters, incoming patch/diff, etc.) into functions/data-structures that represent SQL operations as closely as possible.

Are you specifically aiming for this, or do things just happen to be lining up neatly this way?

@wz1000 (Collaborator, Author) commented Nov 1, 2016

Is ProductComparator even getting used?

Neither ProductView nor ProductComparator is getting used right now.

ProductComparator doesn't even need to fit into the "parse a list and then mconcat" model. I just put it in because we get the extra power for free. For example, we can now order by costPrice and then comparisionPrice: if the costPrices are equal, mconcat will automatically take care of ordering by comparisionPrice.
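
This "order by X, then Y" behaviour is exactly the standard Monoid instance for Ordering, which keeps the first non-EQ result. A sketch, with the accessors assumed:

import Data.Ord (comparing)

newtype ProductComparator = ProductComparator (Product -> Product -> Ordering)

instance Semigroup ProductComparator where
  -- f's verdict wins unless it is EQ, in which case g breaks the tie.
  ProductComparator f <> ProductComparator g =
    ProductComparator (\a b -> f a b <> g a b)

instance Monoid ProductComparator where
  mempty = ProductComparator (\_ _ -> EQ)

byCostPrice, byComparisionPrice :: ProductComparator
byCostPrice        = ProductComparator (comparing costPrice)
byComparisionPrice = ProductComparator (comparing comparisionPrice)
-- mconcat [byCostPrice, byComparisionPrice] orders by costPrice,
-- breaking ties with comparisionPrice.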

Does QueryParams result in the params being parsed into a [x] (i.e. a list), or have you done something special to make Servant behave this way?

This is the way QueryParams behaves by default.

Theoretically, this could be replaced by a foldl' as well, right?

Yes, but by explicitly stating that it is a Monoid, we get to use a nice interface (mconcat) which behaves in a mathematically consistent way, making it easier to reason about.

@wz1000 (Collaborator, Author) commented Nov 1, 2016

At a conceptual level, the pattern/architecture that you seem to be going towards is the following: transform the HTTP request (incoming JSON, query parameters, incoming patch/diff, etc.) into functions/data-structures that represent SQL operations as closely as possible.

Only ProductFilter is defined using a persistent specific interface.

@saurabhnanda (Contributor) commented Nov 1, 2016

Only ProductFilter is defined using a persistent specific interface.

I'm sure if you think hard enough you'll be able to state type-safe updates in terms of a Persistent interface, as well :)

To me, both these approaches have something in common (which is significantly different from the standard Rails way of doing things), but I'm unable to put a finger on what exactly that is.

@saurabhnanda (Contributor)

Neither ProductView nor ProductComparator are getting used right now.

What would it do to the Servant API signatures, if you use ProductView completely?

@wz1000 (Collaborator, Author) commented Nov 1, 2016

@saurabhnanda

When I took my first stab at writing ProductFilter, I implemented it using a simple Haskell function:

newtype ProductFilter = ProductFilter { getFilter :: DBProduct -> All } deriving (Monoid)

where All (defined in Data.Monoid) is the standard Bool monoid over (&&).

Once I had this, I realised that to use it I would have to load all the products in the database and then filter over that list. That's when I realised that persistent's [Filter DBProduct] type offers pretty much the same monoidal interface: mappend is just xs ++ ys for filter lists xs and ys, and persistent treats a list of filters as their conjunction.

@wz1000 (Collaborator, Author) commented Nov 1, 2016

What would it do to the Servant API signatures, if you use ProductView completely?

Not much. The return type would just change from Product to AppJSON.

@saurabhnanda (Contributor)

So, is this sprint complete?

Adding activation for user/tenant creation

Tenant creation code still has some undefineds in it, right? Also, what about storing the activation key in the DB?

Implement Product API

Product creation as well, or just fetching and filtering products?

@saurabhnanda (Contributor)

Yes, but by explicitly stating that it is a Monoid, we get to use a nice interface(mconcat) which behaves in a mathematically consistent way, making it easier to reason about.

Was thinking more about this comment. Can you elaborate how mconcat is better compared to a fold?

Also, do you want to add anything to this PR?

@wz1000 (Collaborator, Author) commented Nov 3, 2016

Was thinking more about this comment. Can you elaborate how mconcat is better compared to a fold?

First, on the issue of correctness, there is the property that the product of two monoids is also a monoid. The monoid instances for ProductView and ProductFilter follow directly from this, and there is only one canonical way to make a monoid out of the product of two monoids. On the other hand, there are infinitely many ways to write a foldl/foldr, and so infinitely many ways to get it wrong.

Second, monoid composition is associative: (a <> b) <> c is equivalent to a <> (b <> c). Using mconcat indicates that we do not care about the evaluation order, so the compiler is free to evaluate the result however it likes. It can evaluate mconcat [a,b,c,d] as a <> (b <> (c <> d)), or ((a <> b) <> c) <> d, or even something bizarre like (a <> (b <> c)) <> d.
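
For reference, mconcat's default definition in base is itself just a fold, pinned to the monoid's own unit and operation:

mconcat :: Monoid a => [a] -> a
mconcat = foldr mappend mempty

So a hand-written foldl'/foldr can always reproduce it, but mconcat rules out picking the wrong accumulator or initial value by construction.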

@saurabhnanda (Contributor)

Thanks for the quick primer on Monoids. Any comments on:

Also, do you want to add anything to this PR?

@saurabhnanda (Contributor)

@wz1000 update, please.

@wz1000 (Collaborator, Author) commented Nov 9, 2016

@saurabhnanda

As of the commit I just pushed, the domain API is pretty much complete, other than photos and product updates.

@saurabhnanda (Contributor)

Comments:

  • Is there any way to run custom SQL statements in-sync with the Persistent migration? For example, if I want to add a custom CHECK CONSTRAINT on a table, how do I do it?
  • A user can belong to many roles. The current schema restricts a user to belong to a single role only.
  • Why have HasTimestamp instances not been defined for all relevant tables?
  • Shouldn't dbUpdateTenant return the updated tenant record back? Shouldn't activateTenant return the updated tenant record back? Is there an efficient version of INSERT ... RETURNING or UPDATE ... RETURNING in Persistent?
    • Shouldn't all DB APIs that insert/update DB rows return the updated/inserted rows?
  • Any way to write a wrapper on insert and update to make them take care of createdAt and updatedAt fields automagically? (I see you already have applyUpdate, but that's not being used everywhere)
  • I removed the requirePermission (EditUser uid) expression from dbUpdateUsers, and the code compiled! Is there a way to make compilation fail if someone forgets to put a call to requirePermission?
  • Is it possible to model the Product in such a way that we don't need the following runtime checks? Basically make illegal states unrepresentable?
     case piType of
       Phys -> when (any (\VariantI{..} -> isNothing viWeightInGrams
                                        || isNothing viWeightDisplayUnit)
                         piVariants) $
                 throwError PhysicalProductFieldsMissing
       Dig -> when (any (\VariantI{..} -> isJust viWeightInGrams
                                       || isJust viWeightDisplayUnit)
                         piVariants) $
                 throwError DigitalProductExtraFields

(contd)

@saurabhnanda (Contributor) left a comment


  • Need some commentary on OperationT and TransactionT

lift $ runExceptT $ do
  time <- liftIO getCurrentTime
  when (null piVariants) $
    throwError EmptyVariantList
Contributor

Possible to avoid this runtime error by using length-restricted lists, perhaps?

Collaborator Author

I can make the JSON parsing fail instead of throwing the error in createProduct

time <- liftIO getCurrentTime
when (null piVariants) $
  throwError EmptyVariantList
case piType of
Contributor

Possible to avoid this runtime check by making illegal states unrepresentable in our domain?

Collaborator Author

Like the above, all we can do is shift failure to the parsing of the JSON instead of product creation.

import GHC.Generics
import Data.Aeson

newtype Price = Price { intPrice :: Int }
Contributor

This type should map to NUMERIC in the DB, which can represent fractional values without loss of precision (basically, it's not a standard float).

let urlSlug' = fromMaybe (sluggify piName)
                         piURLSlug
urlSlug <- lift $ makeUnique urlSlug'
let dbProd = DBProduct { _dBProductAdvertisedPrice = advertisedPrice
Contributor

Possible to use some generic programming to get rid of this boilerplate in all create APIs in our domain?

Collaborator Author

I can't think of anything, as long as the representation of the input is distinct from the representation in the database.

variants <- lift $ selectList [DBVariantProductID ==. pid] []
return $ Product (Entity pid prod) variants

dbGetProductList :: MonadIO m => ProductFilter -> OperationT (TransactionT m) [Product]
Contributor

There should be an easier way to write this function!

Collaborator Author

What is wrong with this?

go s _ = return s
s' <- lift $ foldM go s (roleCapabilities role)
if null s'
runOperation :: (MonadIO m)
Contributor

need some documentation on what's going on here.

@wz1000 (Collaborator, Author) commented Nov 11, 2016

runOperation basically takes a permission-protected operation and a user, and returns the result of the operation only if the user has the required permissions to run it. Otherwise it throws an error describing which permissions are needed.

It does so by first generating the set of permissions required to perform the operation. Then, iterating over the capabilities of the user, it removes the permissions the user has. If any permissions are left over, it returns an error; otherwise, it returns the result of the operation.

This ensures that the server never has access to the results of a DB API call that the currently logged-in user shouldn't be able to access.
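
A hedged sketch of the shape this check might take; this is not the PR's actual signature, and collectPermissions, userCapabilities, and MissingPermissions are assumed names:

import qualified Data.Set as Set

runOperation :: Monad m => OperationT m a -> User -> m (Either AppError a)
runOperation op user = do
  -- Run the operation while gathering the permissions it declared
  -- via requirePermission (collectPermissions is an assumed helper):
  (result, required) <- collectPermissions op
  let granted = Set.fromList (userCapabilities user)
      missing = required `Set.difference` granted
  pure $ if Set.null missing
           then Right result
           else Left (MissingPermissions missing)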

@saurabhnanda (Contributor)

I'm missing something in the update infrastructure again. Why is the following erroring out? What exactly does one need to pass to an Updater '[x]?

:t (dbUpdateUser (toSqlKey 10) (\u -> u & userFirstName .~ "Saurabh")) 

<interactive>:1:30: error:
    • Couldn't match expected type ‘Updater
                                      '[Types.HasHumanName, Types.HasContactDetails]’
                  with actual type ‘UserBase userType00 -> UserBase userType00’
    • The lambda expression ‘\ u -> u & userFirstName .~ "Saurabh"’
      has one argument,
      but its type ‘UserUpdater’ has none
      In the second argument of ‘dbUpdateUser’, namely
        ‘(\ u -> u & userFirstName .~ "Saurabh")’
      In the expression:
        (dbUpdateUser
           (toSqlKey 10) (\ u -> u & userFirstName .~ "Saurabh"))

@saurabhnanda (Contributor)

Also, just thinking aloud: if we have the Updater x infra working properly, do we really need the *Input pattern (e.g. UserInput, TenantInput, etc.)? Can't we apply the same principles and get a Creator x infra in place?

@wz1000 (Collaborator, Author) commented Nov 11, 2016

@saurabhnanda

I'm missing something in the update infrastructure again. Why is the following erroring out? What exactly does one need to pass to an Updater '[x]?

You forgot to wrap the function in a U constructor. Also, userFirstName's type is too monomorphic; the polymorphic firstName should be used instead.

:t (dbUpdateUser (toSqlKey 10) (U $ \u -> u & firstName .~ "Saurabh")) 

Also, the lambda is redundant in this case. You can simply write:

:t dbUpdateUser (toSqlKey 10) (U $ firstName .~ "Saurabh") 
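
Purely as a reading aid, and not the PR's actual definition: the error message suggests an Updater indexed by the type classes its wrapped function is allowed to assume, roughly along these lines:

{-# LANGUAGE ConstraintKinds, DataKinds, GADTs, RankNTypes,
             TypeFamilies, TypeOperators, UndecidableInstances #-}
import Data.Kind (Constraint, Type)

-- Fold a type-level list of constraints into a single constraint.
type family All (cs :: [Type -> Constraint]) (a :: Type) :: Constraint where
  All '[]       a = ()
  All (c ': cs) a = (c a, All cs a)

data Updater (cs :: [Type -> Constraint]) where
  -- U wraps a function that may only use the listed capabilities, which
  -- is why the polymorphic firstName lens fits but userFirstName does not.
  U :: (forall a. All cs a => a -> a) -> Updater cs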

@wz1000 (Collaborator, Author) commented Nov 11, 2016

@saurabhnanda

I removed the requirePermission (EditUser uid) expression from dbUpdateUsers, and the code compiled! Is there a way to make compilation fail if someone forgets to put a call to requirePermission?

Not really, since OperationT m is a monad. All monads have return defined. Consider return "something" :: OperationT IO String. The only "reasonable" definition of return creates an OperationT m which requires no permissions.

Since the compiler has no way to check if the permissions you've required for any given operation are actually the permissions that operation requires, this shouldn't be that big of a tradeoff.

@wz1000 (Collaborator, Author) commented Nov 11, 2016

Why have HasTimestamp instances not been defined for all relevant tables?

HasTimestamp is only really required for updates, and I haven't gotten around to writing update operations for all the entities. Once I get to that, the compiler will force me to define the relevant instances.

@wz1000 (Collaborator, Author) commented Nov 11, 2016

Is there any way to run custom SQL statements in-sync with the Persistent migration? For example, if I want to add a custom CHECK CONSTRAINT on a table, how do I do it?

Persistent provides rawSql that you can run at migration time.
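
For example (a sketch; the table and constraint names are made up, and idempotence handling is omitted), the DDL can be run right after the generated migration with rawExecute, persistent's helper for statements that return no rows:

{-# LANGUAGE OverloadedStrings #-}
import Control.Monad.IO.Class (MonadIO)
import Database.Persist.Sql (SqlPersistT, rawExecute, runMigration)

migrateWithChecks :: MonadIO m => SqlPersistT m ()
migrateWithChecks = do
  runMigration migrateAll  -- the TH-generated migration for our models
  rawExecute
    "ALTER TABLE product ADD CONSTRAINT price_positive CHECK (price >= 0)"
    []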

@wz1000 (Collaborator, Author) commented Nov 11, 2016

Shouldn't all DB APIs that insert/update DB rows return the updated/inserted rows?

That's just a simple matter of replacing update with updateGet.
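
i.e. something along these lines (a sketch; the field and value names are assumptions):

import Database.Persist (updateGet, (=.))

-- updateGet performs the UPDATE and hands back the record as it is
-- after the update, instead of update's ().
activateTenant tenantId = updateGet tenantId [DBTenantStatus =. ActiveStatus]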

@wz1000 (Collaborator, Author) commented Nov 11, 2016

Is it possible to model the Product in such a way that we don't need the following runtime checks? Basically make illegal states unrepresentable?

We need runtime checks somewhere. We can just shift the responsibility to the JSON parser instead.

@wz1000 (Collaborator, Author) commented Nov 11, 2016

Any way to write a wrapper on insert and update to make them take care of createdAt and updatedAt fields automagically? (I see you already have applyUpdate, but that's not being used everywhere)

It is possible; we just have to add persistUpdatedAtField, persistCreatedAtField :: EntityField s UTCTime to HasTimestamp.
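
A sketch of that extension; the two class methods are as proposed above, while the wrapper name and constraints are assumptions:

import Control.Monad.IO.Class (MonadIO, liftIO)
import Data.Time (UTCTime, getCurrentTime)
import Database.Persist
  (EntityField, Key, PersistRecordBackend, Update, update, (=.))
import Database.Persist.Sql (SqlBackend, SqlPersistT)

class HasTimestamp rec where
  persistCreatedAtField :: EntityField rec UTCTime
  persistUpdatedAtField :: EntityField rec UTCTime

-- A wrapper that stamps updatedAt on every update automatically:
updateStamped :: (MonadIO m, HasTimestamp rec, PersistRecordBackend rec SqlBackend)
              => Key rec -> [Update rec] -> SqlPersistT m ()
updateStamped key updates = do
  now <- liftIO getCurrentTime
  update key ((persistUpdatedAtField =. now) : updates)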

@saurabhnanda (Contributor)

Regarding removal of runtime errors:

  • I feel some checks should be moved to JSON parsing. Isn't that the "Haskell way" of doing things? Ensure data sanity at system boundaries -- deal with perfect domain models internally.
  • The products table was purposely modelled to represent two types of products (digital and physical) with some fields that are different in both. Shouldn't this be modelled in a better way in our domain to make illegal states unrepresentable? Example:
    • data Product = PhysicalProduct{...} | DigitalProduct {...}
    • Or:
data DifferentProductFields = DigitalProductFields{...} | PhysicalProductFields{...}
data Product = Product{differentFields :: DifferentProductFields, ...}

Is there any way to run custom SQL statements in-sync with the Persistent migration? For example, if I want to add a custom CHECK CONSTRAINT on a table, how do I do it?

Persistent provides rawSql that you can run at migration time.

Possible to add the relevant CHECK CONSTRAINTS for the sake of completeness?

Shouldn't all DB APIs that insert/update DB rows return the updated/inserted rows?

That's just a simple matter of replacing update with updateGet.

Can you do that, for the sake of completeness and a better API surface? Also, can you check whether updateGet actually maps to UPDATE ... RETURNING or is an UPDATE followed by a SELECT (and hence inefficient)?

Any way to write a wrapper on insert and update to make them take care of createdAt and updatedAt fields automagically? (I see you already have applyUpdate, but that's not being used everywhere)

It is possible; we just have to add persistUpdatedAtField, persistCreatedAtField :: EntityField s UTCTime to HasTimestamp.

Possible to implement, and close?

@saurabhnanda (Contributor)

I removed the requirePermission (EditUser uid) expression from dbUpdateUsers, and the code compiled! Is there a way to make compilation fail if someone forgets to put a call to requirePermission?

Not really, since OperationT m is a monad. All monads have return defined. Consider return "something" :: OperationT IO String. The only "reasonable" definition of return creates an OperationT m which requires no permissions.

Since the compiler has no way to check if the permissions you've required for any given operation are actually the permissions that operation requires, this shouldn't be that big of a tradeoff.

I was wondering if it is worthwhile to have a SecureDB "composable action" (not a monad, and definitely not a monad transformer), which allows easy chaining of actions in SecureDB, but which needs the permissions to be "unwrapped" into a regular DBOperationWithPermissionT monad?

@wz1000 (Collaborator, Author) commented Nov 11, 2016

@saurabhnanda

The products table was purposely modelled to represent two types of products (digital and physical) with some fields that are different in both. Shouldn't this be modelled in a better way in our domain to make illegal states unrepresentable?

I feel some checks should be moved to JSON parsing. Isn't that the "Haskell way" of doing things? Ensure data sanity at system boundaries -- deal with perfect domain models internally.

Yeah, but right now we have a flat incoming data structure (JSON) and a flat target DB representation. Haskell is simply translating between the two representations; no real computations are performed on those data structures in Haskell. I felt it was overkill to translate to an intermediate "type-safe" Haskell representation, only to immediately write it out to a flat DB representation.

@saurabhnanda (Contributor)

Yeah, but right now we have a flat incoming data structure (JSON) and a flat target DB representation. Haskell is simply translating between the two representations; no real computations are performed on those data structures in Haskell. I felt it was overkill to translate to an intermediate "type-safe" Haskell representation, only to immediately write it out to a flat DB representation.

Well, we are trying to write a real-life webapp without actually writing a full-fledged real-life webapp :) So, while it may seem that it's overkill, discovering the min-viable architecture for a large webapp is the overall goal we're targeting. Unless you feel that this would be overkill even for a fully specced-out, large webapp.

@saurabhnanda (Contributor)

@wz1000 any thoughts on #47 (comment) ?

@wz1000 (Collaborator, Author) commented Nov 11, 2016

@saurabhnanda

@wz1000 any thoughts on #47 (comment)?

While Haskell provides excellent support for manipulating records (via libraries like lens), record creation is still a major pain. I've actually thought about this a lot, and there are three approaches I could think of:

  • Using lazy record fields along with undefined to build records. The major drawback to this is, well, you can't tell if a field is still undefined. If you forget to fill in a field, you won't know until your DB write crashes.
  • Using an extensible record library like vinyl or hlist. This doesn't map well to persistent, and extensible record libraries aren't nearly as well known and used in the Haskell ecosystem as lens.
  • Building a more lightweight, specialised system for our purposes that is also nice to use. It should be possible, but it would require heavy use of type-level machinery.

@wz1000 (Collaborator, Author) commented Nov 11, 2016

Well, we are trying to write a real-life webapp without actually writing a full-fledged real-life webapp :) So, while it may seem that it's overkill, discovering the min-viable architecture for a large webapp is the overall goal we're targeting. Unless you feel that this would be overkill even for a fully specced-out, large webapp.

When choosing a representation for your data, the main consideration is the kinds of operations you expect to perform on it. If you are mainly performing SQL reads/writes, the optimal representation will be pretty close to the DB row. If we expect some Haskell operations to be performed on products, the representation we use will evolve accordingly (as it will when I get around to implementing product updates, with permissions and all).

In short, it's hard to choose the optimal representation for some data without knowing in advance what kinds of manipulations you expect to perform on it.

@saurabhnanda (Contributor)

When choosing a representation for your data, the main consideration is the kinds of operations you expect to perform on it. If you are mainly performing SQL reads/writes, the optimal representation will be pretty close to the DB row. If we expect some Haskell operations to be performed on products, the representation we use will evolve accordingly (as it will when I get around to implementing product updates, with permissions and all).

So, you want to tackle this when you handle product updates? Or push this to next sprint?

On the UI side, we're trying to share models with the server. So, even if we feel that the server itself doesn't require a complex domain model, I'm sure there are advantages from a UI perspective.

@saurabhnanda (Contributor)

It should be possible to build a more lightweight, specialised system for our purposes that is also nice to use, but it would require a heavy use of type level machinery.

What about a Creator x along the lines of an Updater x? The incoming JSON is parsed into a Creator value, which is just a bunch of calls to a set of restricted lens setters?

@wz1000 (Collaborator, Author) commented Nov 11, 2016

What about a Creator x along the lines of an Updater x? The incoming JSON is parsed into a Creator value, which is just a bunch of calls to a set of restricted lens setters?

Again, Haskell provides little to no machinery for incrementally building up records. To do this, some kind of extensible record library would have to be used.

To write Updater, I relied on Haskell's excellent support for polymorphism and function composition. Lenses can get values or modify existing values. The keyword there is "existing": if the field doesn't already exist, a lens can't set it.

@saurabhnanda (Contributor)

The keyword there is "existing": if the field doesn't already exist, a lens can't set it.

The Default type class to the rescue? The Creator x starts off with a Default x value and applies a bunch of setters to it.

@wz1000 (Collaborator, Author) commented Nov 11, 2016

@saurabhnanda

Still, there is no way to tell whether all the fields have been set to a non-default value, and you don't want things like the product name or price to be default values.

@saurabhnanda (Contributor)

Still, there is no way to tell whether all the fields have been set to a non-default value, and you don't want things like the product name or price to be default values.

So, here is a practical flow:

  • Some fields will have default values in the Default instance because the incoming JSON can omit specifying them, i.e. true default behaviour. Non-omittable fields will be undefined.
  • The incoming JSON will go through a bunch of validations while being parsed, typically via digestive-functors. If any non-omittable field is missing, ideally the JSON should not even parse and should result in a 400 response. A correct implementation would therefore prevent undefined fields from being passed down to the domain API.
  • However, an incorrect implementation where a validation has not been added might pass an undefined field down to the domain API. This can be tackled in two ways. Either we write a generic JSON parsing function whose API signature forces the programmer to do the right thing, or the domain APIs take records tagged by a phantom type, e.g. User Validated, and we have a validation function which takes an x to an x Validated and ensures that nothing is undefined (see the sketch below).

Broad ideas, I know. But is this something that can't be done? It would help us get rid of incoming JSON types whose only reason to exist is that some underlying columns need to be "protected".
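
A sketch of the phantom-type half of that idea (all names assumed):

{-# LANGUAGE DataKinds, KindSignatures #-}
import           Data.Text (Text)
import qualified Data.Text as T

data ValidationState = Unvalidated | Validated
data ValidationError = EmptyName deriving Show

-- One record, tagged by whether it has passed validation.
data User (v :: ValidationState) = User
  { userName  :: Text
  , userEmail :: Text
  }

-- The only way to obtain a User 'Validated, so domain APIs that demand
-- one can never be handed unchecked input.
validateUser :: User 'Unvalidated -> Either ValidationError (User 'Validated)
validateUser (User name email)
  | T.null name = Left EmptyName
  | otherwise   = Right (User name email)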

@wz1000 (Collaborator, Author) commented Nov 12, 2016

The incoming JSON will go through a bunch of validations while being parsed, typically via digestive functors

Digestive functors present an applicative interface. Applicatives are great for parsing context-free grammars, but our product JSON is described by a context-sensitive grammar: the presence or absence of the weight fields depends on the result of parsing the product type field. So, in order to parse this, we need the full power of monadic parsers.
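
Concretely, aeson's Parser is a monad, so the dependency of later fields on the parsed type field can be expressed directly. A sketch; the field names and the ProductInput type are assumptions:

{-# LANGUAGE OverloadedStrings #-}
import Data.Aeson
import Data.Aeson.Types (Parser)
import Data.Text (Text)

data ProductInput
  = PhysicalInput Text Int  -- name, weight in grams
  | DigitalInput  Text      -- name only
  deriving Show

parseProduct :: Value -> Parser ProductInput
parseProduct = withObject "Product" $ \o -> do
  -- Which fields we demand depends on the already-parsed "type" field;
  -- this is the context-sensitivity that forces a monadic parser.
  ptype <- o .: "type"
  name  <- o .: "name"
  case ptype :: Text of
    "physical" -> PhysicalInput name <$> o .: "weightInGrams"
    "digital"  -> pure (DigitalInput name)
    _          -> fail "unknown product type"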

@saurabhnanda (Contributor)

Do you know of any monadic parsers? Does this mean that digestive functors are rendered useless for any JSON where the validation of one field depends on the value of another field?


@wz1000 (Collaborator, Author) commented Nov 12, 2016

@saurabhnanda

Most parsers, including aeson's Parser type as well as parsec, attoparsec, and most other parser combinator libraries for Haskell, provide a monadic interface in addition to an applicative one.

Applicative parsing is preferred wherever possible because applicatives can be analysed statically, and also give rise to much more efficient parsers.

Does this mean that digestive functors are rendered useless for any JSON where the validation of one field depends on the value of another field?

Yes, unless you split the type into two: one for Digital and one for Physical products. Note that this doesn't necessarily mean that you have to use two separate records; in Haskell this can be achieved by indexing the type with a ProductType parameter:

data Product (pt :: ProductType) where ...

However, to parse this type you would have to try parsing both indices:

case parseProduct text :: Maybe (Product Physical) of
  Just p -> ...
  Nothing -> case parseProduct text :: Maybe (Product Digital) of
               Just p -> ...
               Nothing -> ...
