MLTransform #26795

Merged (68 commits) on Jul 6, 2023

Commits:
6c641ec
Initial work on MLTransform and ProcessHandler
AnandInguva May 10, 2023
9c93dc0
Support for containers: List, Dict[str, np.ndarray]
AnandInguva May 15, 2023
89e889d
Add min, max, artifacts for scale_0_to_1
AnandInguva May 16, 2023
5548103
Add more transform functions and artifacts
AnandInguva May 19, 2023
f5d050a
Add generic type annotations
AnandInguva May 19, 2023
6681916
Add unit tests
AnandInguva May 30, 2023
3be1cfd
Add support for saving intermediate results for a transform
AnandInguva May 30, 2023
6caba7e
Add schema to the output PCollection
AnandInguva May 31, 2023
901a74c
Remove MLTransformOutput and return Row instead with schema
AnandInguva May 31, 2023
361e0bb
Convert primitive type to list using a DoFn. Remove FixedLenFeatureSpec
AnandInguva Jun 1, 2023
681d164
Add append_transform to the ProcessHandler
AnandInguva Jun 1, 2023
eac8b3f
Remove param self.has_artifacts, add artifact_location to handler..an…
AnandInguva Jun 2, 2023
011d5d1
Move tensorflow import into the try except catch
AnandInguva Jun 2, 2023
def7eb4
Add type annotations for the data transforms
AnandInguva Jun 2, 2023
1a0a0ed
Add tft test in tox.ini
AnandInguva Jun 5, 2023
2393254
Add step name for TFTProcessHandler
AnandInguva Jun 5, 2023
f25618e
Remove unsupported tft versions
AnandInguva Jun 5, 2023
4256c99
Fix mypy
AnandInguva Jun 5, 2023
df73361
Refactor TFTProcessHandlerDict to TFTProcessHandlerSchema
AnandInguva Jun 5, 2023
4497bb5
Update doc for data processing transforms
AnandInguva Jun 6, 2023
77b3634
Fix checking the typing container types
AnandInguva Jun 6, 2023
baf1ae7
Refactor code
AnandInguva Jun 6, 2023
044f509
Fail TFTProcessHandler on a non-global window PColl
AnandInguva Jun 6, 2023
c312aef
Remove underscore
AnandInguva Jun 7, 2023
68a2529
Remove high level functions
AnandInguva Jun 7, 2023
e6ef468
Add TFIDF
AnandInguva Jun 9, 2023
2be4ba6
Fix tests with new changes[WIP]
AnandInguva Jun 9, 2023
7a290e2
Fix tests
AnandInguva Jun 10, 2023
c2a1fae
Refactor class name to CamelCase and remove kwrags
AnandInguva Jun 10, 2023
21dadb1
use is_default instead of isinstance
AnandInguva Jun 16, 2023
df05169
Remove falling back to staging location for artifact location
AnandInguva Jun 16, 2023
42fd6c4
Add TFIDF tests
AnandInguva Jun 16, 2023
5c6dcb4
Remove __str__
AnandInguva Jun 16, 2023
43d24ad
Refactor skip statement
AnandInguva Jun 16, 2023
618b2fa
Add utils for fetching artifacts on compute and apply vocab
AnandInguva Jun 16, 2023
a814650
Make ProcessHandler internal class
AnandInguva Jun 17, 2023
0a61955
Only run analyze stage when transform_fn(artifacts) is not computed b…
AnandInguva Jun 22, 2023
33f8fb2
Fail if pipeline has non default window during artifact producing stage
AnandInguva Jun 22, 2023
bc22e9f
Add support for Dict, recordbatch and introduce artifact_mode
AnandInguva Jun 23, 2023
4e07f7d
Hide process_handler from user. Make TFTProcessHandler as default
AnandInguva Jun 23, 2023
eeed56c
Refactor few tests
AnandInguva Jun 23, 2023
9eed989
Comment a test
AnandInguva Jun 23, 2023
3453b9f
Save raw_data_meta_data so that it can be used during consume stage
AnandInguva Jun 23, 2023
3e8f198
Refactor code
AnandInguva Jun 23, 2023
e8a3686
Add test on artifacts
AnandInguva Jun 23, 2023
72ea029
Fix imports
AnandInguva Jun 26, 2023
55b04e8
Add tensorflow_metadata to pydocs
AnandInguva Jun 26, 2023
b65ff05
Merge remote-tracking branch 'upstream/master' into mltransform
AnandInguva Jun 26, 2023
00fb944
Fix test
AnandInguva Jun 26, 2023
f11d02b
Add TFIDF to import
AnandInguva Jun 26, 2023
7b2200f
Add basic example
AnandInguva Jun 26, 2023
bca2dda
Remove redundant logging statements
AnandInguva Jun 27, 2023
295a80d
Add test for multiple columns on MLTransform
AnandInguva Jun 29, 2023
1d0b5b1
Add todo about what to do when new process handler is introduced
AnandInguva Jun 29, 2023
64bba5e
Add abstractmethod decorator
AnandInguva Jun 29, 2023
034a066
Edit Error message
AnandInguva Jun 29, 2023
1eef0e7
Update docs, error messages
AnandInguva Jun 29, 2023
4ed94c7
Remove record batch input/output arg
AnandInguva Jun 30, 2023
2e6c5ac
Modify generic types
AnandInguva Jun 30, 2023
bf81d46
Fix import sort
AnandInguva Jun 30, 2023
0860489
Merge remote-tracking branch 'upstream/master' into mltransform
AnandInguva Jun 30, 2023
1dcdaa8
Fix mypy errors - best effort
AnandInguva Jul 5, 2023
bb9336a
Fix tests
AnandInguva Jul 5, 2023
17a4eb1
Add TFTOperation doc
AnandInguva Jul 5, 2023
20f416d
Rename tft_transform to tft
AnandInguva Jul 5, 2023
ba33cb7
Fix hadler_test
AnandInguva Jul 5, 2023
f0c023b
Fix base_test
AnandInguva Jul 5, 2023
a315091
Fix pydocs
AnandInguva Jul 5, 2023
118 changes: 118 additions & 0 deletions sdks/python/apache_beam/examples/ml_transform/ml_transform_basic.py
@@ -0,0 +1,118 @@
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

"""
This example demonstrates how to use MLTransform.
MLTransform is a PTransform that applies multiple data transformations on the
incoming data.

This example computes the vocabulary on the incoming data. Then, it computes
the TF-IDF of the incoming data using the vocabulary computed in the previous
step.

1. ComputeAndApplyVocabulary computes the vocabulary on the incoming data and
overrides the incoming data with the vocabulary indices.
2. TFIDF computes the TF-IDF of the incoming data using the vocabulary and
provides vocab_index and tf-idf weights. vocab_index is suffixed with
'_vocab_index' and tf-idf weights are suffixed with '_tfidf' to the
original column name (which is the output of ComputeAndApplyVocabulary).

MLTransform produces artifacts. For example, ComputeAndApplyVocabulary
produces a text file containing the vocabulary, which is saved in
`artifact_location`. ComputeAndApplyVocabulary outputs vocab indices
associated with the saved vocab file. This mode of MLTransform is called
artifact `produce` mode, which is useful when data is preprocessed before
ML model training.

The second mode of MLTransform is artifact `consume` mode. In this mode, the
transformations are applied to the incoming data using the artifacts produced
by a previous run of MLTransform. This mode is useful when data is
preprocessed before ML model inference.
"""

import argparse
import logging
import tempfile

import apache_beam as beam
from apache_beam.ml.transforms.base import ArtifactMode
from apache_beam.ml.transforms.base import MLTransform
from apache_beam.ml.transforms.tft import ComputeAndApplyVocabulary
from apache_beam.ml.transforms.tft import TFIDF
from apache_beam.ml.transforms.utils import ArtifactsFetcher


def parse_args():
parser = argparse.ArgumentParser()
parser.add_argument('--artifact_location', type=str, default='')
return parser.parse_known_args()


def run(args):
data = [
dict(x=["Let's", "go", "to", "the", "park"]),
dict(x=["I", "enjoy", "going", "to", "the", "park"]),
dict(x=["I", "enjoy", "reading", "books"]),
dict(x=["Beam", "can", "be", "fun"]),
dict(x=["The", "weather", "is", "really", "nice", "today"]),
dict(x=["I", "love", "to", "go", "to", "the", "park"]),
dict(x=["I", "love", "to", "read", "books"]),
dict(x=["I", "love", "to", "program"]),
]

with beam.Pipeline() as p:
input_data = p | beam.Create(data)

# artifacts produce mode.
input_data |= (
'MLTransform' >> MLTransform(
artifact_location=args.artifact_location,
artifact_mode=ArtifactMode.PRODUCE,
).with_transform(ComputeAndApplyVocabulary(
columns=['x'])).with_transform(TFIDF(columns=['x'])))

# _ = input_data | beam.Map(logging.info)

with beam.Pipeline() as p:
input_data = [
dict(x=['I', 'love', 'books']), dict(x=['I', 'love', 'Apache', 'Beam'])
]
input_data = p | beam.Create(input_data)

# artifacts consume mode.
input_data |= (
MLTransform(
artifact_location=args.artifact_location,
artifact_mode=ArtifactMode.CONSUME,
# you don't need to specify transforms as they are already saved in
# the artifacts.
))

_ = input_data | beam.Map(logging.info)

# To fetch the artifacts after the pipeline is run
artifacts_fetcher = ArtifactsFetcher(artifact_location=args.artifact_location)
vocab_list = artifacts_fetcher.get_vocab_list()
assert vocab_list[22] == 'Beam'


if __name__ == '__main__':
logging.getLogger().setLevel(logging.INFO)
args, pipeline_args = parse_args()
# for this example, create a temp artifact location if not provided.
if args.artifact_location == '':
args.artifact_location = tempfile.mkdtemp()
run(args)
16 changes: 16 additions & 0 deletions sdks/python/apache_beam/ml/transforms/__init__.py
@@ -0,0 +1,16 @@
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
165 changes: 165 additions & 0 deletions sdks/python/apache_beam/ml/transforms/base.py
@@ -0,0 +1,165 @@
#
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# pytype: skip-file

import abc
from typing import Generic
from typing import List
from typing import Optional
from typing import Sequence
from typing import TypeVar

import apache_beam as beam

__all__ = ['MLTransform', 'ProcessHandler', 'BaseOperation']

TransformedDatasetT = TypeVar('TransformedDatasetT')
TransformedMetadataT = TypeVar('TransformedMetadataT')

# Input/Output types to the MLTransform.
ExampleT = TypeVar('ExampleT')
MLTransformOutputT = TypeVar('MLTransformOutputT')

# Input to the apply() method of BaseOperation.
OperationInputT = TypeVar('OperationInputT')
# Output of the apply() method of BaseOperation.
OperationOutputT = TypeVar('OperationOutputT')


class ArtifactMode(object):
PRODUCE = 'produce'
CONSUME = 'consume'


class BaseOperation(Generic[OperationInputT, OperationOutputT], abc.ABC):
def __init__(self, columns: List[str]) -> None:
"""
Base class for data processing operations.
Args:
columns: List of column names to apply the transformation to.
"""
self.columns = columns

@abc.abstractmethod
def apply(
self, data: OperationInputT, output_column_name: str) -> OperationOutputT:
"""
Define any processing logic in the apply() method.
processing logics are applied on inputs and returns a transformed
output.
Args:
inputs: input data.
"""


class ProcessHandler(Generic[ExampleT, MLTransformOutputT], abc.ABC):
"""
Only for internal use. No backwards compatibility guarantees.
"""
@abc.abstractmethod
def process_data(
self, pcoll: beam.PCollection[ExampleT]
) -> beam.PCollection[MLTransformOutputT]:
"""
Logic to process the data. This is the entrypoint used by
MLTransform to process incoming data.
"""

@abc.abstractmethod
def append_transform(self, transform: BaseOperation):
"""
Append transforms to the ProcessHandler.
"""


class MLTransform(beam.PTransform[beam.PCollection[ExampleT],
beam.PCollection[MLTransformOutputT]],
Generic[ExampleT, MLTransformOutputT]):
def __init__(
self,
*,
artifact_location: str,
artifact_mode: str = ArtifactMode.PRODUCE,
transforms: Optional[Sequence[BaseOperation]] = None):
"""
Args:
artifact_location: A storage location for artifacts resulting from
MLTransform. These artifacts include transformations applied to
the dataset and generated values like min, max from ScaleTo01,
and mean, var from ScaleToZScore. Artifacts are produced and stored
in this location when the `artifact_mode` is set to 'produce'.
Conversely, when `artifact_mode` is set to 'consume', artifacts are
retrieved from this location. Note that when consuming artifacts,
it is not necessary to pass the transforms since they are inherently
stored within the artifacts themselves. The value assigned to
`artifact_location` should be a valid storage path where the artifacts
can be written to or read from.
transforms: A list of transforms to apply to the data. All the transforms
are applied in the order they are specified. The input of the
i-th transform is the output of the (i-1)-th transform. Multi-input
transforms are not supported yet.
artifact_mode: Whether to produce or consume artifacts. If set to
'consume', the handler will assume that the artifacts are already
computed and stored in the artifact_location. Pass the same artifact
location that was passed during produce phase to ensure that the
right artifacts are read. If set to 'produce', the handler
will compute the artifacts and store them in the artifact_location.
The artifacts will be read from this location during the consume phase.
There is no need to pass the transforms in this case since they are
already embedded in the stored artifacts.
"""
# avoid circular import
# pylint: disable=wrong-import-order, wrong-import-position
from apache_beam.ml.transforms.handlers import TFTProcessHandler
# TODO: When new ProcessHandlers(eg: JaxProcessHandler) are introduced,
# create a mapping between transforms and ProcessHandler since
# ProcessHandler is not exposed to the user.
process_handler: ProcessHandler = TFTProcessHandler(
artifact_location=artifact_location,
artifact_mode=artifact_mode,
transforms=transforms) # type: ignore[arg-type]

self._process_handler = process_handler

def expand(
self, pcoll: beam.PCollection[ExampleT]
) -> beam.PCollection[MLTransformOutputT]:
"""
This is the entrypoint for the MLTransform. This method will
invoke the process_data() method of the ProcessHandler instance
to process the incoming data.

process_data takes in a PCollection and applies the PTransforms
necessary to process the data and returns a PCollection of
transformed data.
Args:
pcoll: A PCollection of ExampleT type.
Returns:
A PCollection of MLTransformOutputT type.
"""
return self._process_handler.process_data(pcoll)

def with_transform(self, transform: BaseOperation):
"""
Add a transform to the MLTransform pipeline.
Args:
transform: A BaseOperation instance.
Returns:
An MLTransform instance.
"""
self._process_handler.append_transform(transform)
return self
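Because `with_transform` returns `self`, transforms can be chained in a single expression, as the example pipeline in this PR does. A minimal sketch of that fluent pattern, using a hypothetical stand-in class rather than the real MLTransform API:

```python
# Hypothetical stand-in illustrating the fluent pattern: returning self
# from with_transform lets multiple transforms be appended in one chain.
class ChainingSketch:
    def __init__(self):
        self.transforms = []

    def with_transform(self, transform):
        self.transforms.append(transform)
        return self  # enables chained calls


t = ChainingSketch().with_transform('vocab').with_transform('tfidf')
print(t.transforms)  # ['vocab', 'tfidf']
```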