
[RFC] Propose a minimal specialization for extract #12

Open · wants to merge 3 commits into base: main
Conversation

Gallaecio (Contributor)

This is a proposal for a specialization of the AsyncClient.request_raw method and its parallel counterpart for the extract endpoint.

It resolves #9, and only implements the two specialization features needed for that issue. But it is meant to set the baseline for similar enhancements in the future.

To-do:

  • Agree on the API (naming, initial and future features, forward-compatibility).
  • Tests.
  • Documentation.

@Gallaecio Gallaecio requested review from kmike and BurnzZ May 17, 2022 12:49
Comment on lines 70 to 77
def http_response_body(self) -> bytes:
    if hasattr(self, "_http_response_body"):
        return self._http_response_body
    base64_body = self._api_response.get("httpResponseBody", None)
    if base64_body is None:
        raise ValueError("API response has no httpResponseBody key.")
    self._http_response_body = b64decode(base64_body)
    return self._http_response_body
Contributor Author

We could later add http_response_text, which would return str, handling decoding.

handle_retries: bool = True,
retrying: Optional[AsyncRetrying] = None,
**kwargs,
) -> Awaitable[ExtractResult]:
Contributor Author

Current features, compared to request_raw:

  • Passes top-level arguments instead of a dictionary.
  • Allows passing url as a positional argument.
  • Allows using Pythonic snake case for top-level query parameters.

Some potential features:

  • Allow a headers parameter, which automatically fills the corresponding API parameters.
  • Allow an outputs parameter, supporting a flag-like way of defining which outputs to enable.

The point of using **kwargs instead of mapping all parameters is forward-compatibility.

return len(self._api_response)

@property
def http_response_body(self) -> Union[bytes, _NotLoaded]:
Collaborator

It seems there could be a big overlap with data structures defined in https://github.com/scrapinghub/web-poet/blob/master/web_poet/page_inputs/http.py

Member

+1

I'm thinking if python-zyte-api should somehow import web-poet to utilize such page_inputs or should the page_inputs themselves be moved outside of web-poet.

Later on, we'd need the functionalities of web_poet.page_inputs.http to process content-encoding when dealing with browserHtml.

Contributor Author

I’ll see you and raise you: scrapy-plugins/scrapy-zyte-api#10 may be in a similar situation.

Contributor Author

Do we want ExtractResult here to be a subclass of web_poet’s HttpResponse, or to expose a compatible interface? Or is the goal only to avoid reinventing the wheel code-wise? Or do we want ExtractResult to expose httpResponseBody, httpResponseHeaders and browserHtml through two HttpResponse objects, or possibly a different, new web_poet object in the case of browserHtml?

Member
@BurnzZ BurnzZ May 19, 2022

Do we want ExtractResult here to be a subclass of web_poet’s HttpResponse, or to expose a compatible interface? Or is the goal only to avoid reinventing the wheel code-wise?

I believe it's both. We'd want the ecosystem revolving around Zyte's extraction and crawling (i.e. web-poet, scrapy-poet, python-zyte-api, scrapy-zyte-api, etc.) to have a similar interface.

I've tried to think of downsides to using web-poet in Zyte API's client, but I can't find any. I think it's more beneficial, since web-poet would be used behind Zyte API; both server and client would then share the same dependency, promoting overall compatibility.

Or do we want ExtractResult to expose httpResponseBody, httpResponseHeaders and browserHtml through two HttpResponse objects, or possibly a different, new web_poet object in the case of browserHtml?

After trying to make httpResponseBody work with text responses in scrapy-plugins/scrapy-zyte-api#10, I realized that we'll need to read the headers easily. web_poet.HttpResponseHeaders could easily accommodate Zyte API's formatting. For example, to determine whether a given response is text-based, we can check the Content-Type header. However, Zyte API returns the headers like this:

"httpResponseHeaders": [..., {'name': 'content-type', 'value': 'text/html; charset=UTF-8'}, ...]

To search for this value, we'd need to iterate through the list of header key-value pairs. On the other hand, web-poet would easily have this as:

>>> headers = HttpResponseHeaders.from_name_value_pairs(zyte_api_response["httpResponseHeaders"])
>>> headers.get('Content-Type')
'text/html; charset=UTF-8'

There are a lot of benefits to using existing web-poet features like this one.

For the browserHtml, I think it would be worth representing it with another class, since web_poet.HttpResponseBody represents bytes.
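To make the comparison concrete without pulling in web-poet, here is a minimal stand-in for the lookup described above. This is a sketch only; web_poet.HttpResponseHeaders is richer than this, but it shows what from_name_value_pairs buys over iterating the raw list of pairs.

```python
class CaseInsensitiveHeaders:
    """Minimal illustration of case-insensitive header lookup.

    Not the web_poet implementation; just the core idea.
    """

    def __init__(self, pairs):
        # Zyte API returns [{"name": ..., "value": ...}, ...]; index the
        # pairs once by lowercased name so lookups need no iteration.
        self._data = {pair["name"].lower(): pair["value"] for pair in pairs}

    @classmethod
    def from_name_value_pairs(cls, pairs):
        return cls(pairs)

    def get(self, name, default=None):
        return self._data.get(name.lower(), default)
```

With this shape, `headers.get("Content-Type")` works regardless of the casing the API used, which is exactly the convenience being argued for.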

zyte_api/aio/client.py (resolved conversation)

@Gallaecio (Contributor Author) commented May 26, 2022

So, what about:

class ExtractResult(Mapping):
    browser_html: BrowserHtml
    response: HttpResponse
    response_headers: HttpResponseHeaders

There will be duplicated information (headers, response.headers, browser_html.headers), but that is probably a good tradeoff for a clean, web-poet-based API, and it is safe in case the API ever supports response headers without a response body or browser HTML. Alternatively, we could remove headers above and let users access headers through any of the other attributes instead.

As for Scrapy, we could have Response.zyte_api be the ExtractResult object from python-zyte-api.
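A minimal sketch of that shape, assuming ExtractResult wraps the raw API response dict as a read-only Mapping. The attribute names follow the snippet above; the browser_html accessor returning the raw value (rather than a web_poet BrowserHtml object) and everything else here are assumptions for illustration.

```python
from collections.abc import Mapping


class ExtractResult(Mapping):
    """Read-only view over the raw Zyte API response dict (sketch only)."""

    def __init__(self, api_response: dict):
        self._api_response = api_response

    def __getitem__(self, key):
        return self._api_response[key]

    def __iter__(self):
        return iter(self._api_response)

    def __len__(self):
        return len(self._api_response)

    @property
    def browser_html(self):
        # In the proposal this would be a web_poet BrowserHtml object;
        # here we just surface the raw API value.
        return self._api_response.get("browserHtml")
```

Keeping the Mapping interface means existing code that treats the response as a plain dict keeps working, while the attributes layer the web-poet types on top.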

Development

Successfully merging this pull request may close these issues.

Simplify extract syntax
3 participants