Sklearn decisiontree (#23630)
Co-authored-by: umairjavaid <[email protected]>
Co-authored-by: Gadri Ebenezer <[email protected]>
Co-authored-by: nathzi1505 <[email protected]>
Co-authored-by: muzakkirhussain011 <[email protected]>
Co-authored-by: AliTarekk <[email protected]>
Co-authored-by: ivy-branch <[email protected]>
Co-authored-by: Shreyansh Bardia <[email protected]>
Co-authored-by: Vaatsalya <[email protected]>
Co-authored-by: Akshay <[email protected]>
Co-authored-by: ReneFabricius <[email protected]>
Co-authored-by: Vaibhav Deshpande <[email protected]>
Co-authored-by: Vaibhav Deshpande <[email protected]>
Co-authored-by: Jomer Barcenilla <[email protected]>
Co-authored-by: Nripesh Niketan <[email protected]>
Co-authored-by: Kareem Morsy <[email protected]>
Co-authored-by: Lucas Alava Peña <[email protected]>
Co-authored-by: Sai-Suraj-27 <[email protected]>
Co-authored-by: Sulaiman Mutawalli <[email protected]>
Co-authored-by: RashulChutani <[email protected]>
Co-authored-by: Moses Daudu <[email protected]>
Co-authored-by: Ahmed Hisham <[email protected]>
Co-authored-by: AnnaTz <[email protected]>
Co-authored-by: Indraneel kumar <[email protected]>
Co-authored-by: hirwa-nshuti <[email protected]>
Co-authored-by: Javeria-Siddique <[email protected]>
Co-authored-by: Peter Kiprop <[email protected]>
Co-authored-by: Humza Tareen <[email protected]>
Co-authored-by: Nitesh Kesharwani <[email protected]>
Co-authored-by: Bhushan Srivastava <[email protected]>
Co-authored-by: hmahmood24 <[email protected]>
Co-authored-by: Daniel4078 <[email protected]>
Co-authored-by: Mostafa Hani <[email protected]>
Co-authored-by: Mostafa Hany <[email protected]>
Co-authored-by: Vismay Suramwar <[email protected]>
Co-authored-by: Eddy Oyieko <[email protected]>
Co-authored-by: Madjid Chergui <[email protected]>
Co-authored-by: Mohammed Ayman <[email protected]>
Co-authored-by: akshatvishu <[email protected]>
Co-authored-by: MahmoudAshraf97 <[email protected]>
Co-authored-by: Yusha Arif <[email protected]>
Co-authored-by: KHETHAN <[email protected]>
Co-authored-by: khethan123 <[email protected]>
Co-authored-by: NripeshN <[email protected]>
Co-authored-by: Aaryan562 <[email protected]>
Co-authored-by: Ario Zareinia <[email protected]>
Co-authored-by: MahadShahid8 <[email protected]>
Co-authored-by: Dharshannan Sugunan <[email protected]>
Co-authored-by: RickSanchezStoic <[email protected]>
Co-authored-by: Abdurrahman Rajab <[email protected]>
Co-authored-by: Waqar Ahmed <[email protected]>
Co-authored-by: Saeed Ashraf <[email protected]>
Co-authored-by: Guilherme Lucas <[email protected]>
Co-authored-by: Anwaar Khalid <[email protected]>
Co-authored-by: Zaeem Ansari <[email protected]>
Co-authored-by: Muhammad Sameer Khan <[email protected]>
Co-authored-by: Tom Edwards <[email protected]>
Co-authored-by: Pragato Bhattacharjee <[email protected]>
Co-authored-by: Abhay Mahajan <[email protected]>
Showing 48 changed files with 5,409 additions and 4,932 deletions.
13 changes: 13 additions & 0 deletions .github/pull_request_template.md
@@ -22,8 +22,21 @@ Close #

- [ ] Did you add a function?
- [ ] Did you add the tests?
- [ ] Did you run your tests and are your tests passing?
- [ ] Did pre-commit not fail on any check?
- [ ] Did you follow the steps we provided?

<!--
Please mark your PR as a draft if you realise after the fact that your tests are not passing or
that your pre-commit check has some failures.
Here are some relevant resources regarding tests and pre-commit:
https://unify.ai/docs/ivy/overview/deep_dive/ivy_tests.html
https://unify.ai/docs/ivy/overview/deep_dive/formatting.html#pre-commit
-->

### Socials:

<!--
@@ -1,62 +1,55 @@
name: Check Semantic and welcome new contributors

on:
pull_request_target:
types:
- opened
- edited
- synchronize
- reopened
workflow_call:

permissions:
pull-requests: write

jobs:
semantics:
name: Semantics
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: amannn/[email protected]
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

pr-compliance-checks:
name: PR Compliance Checks
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: mtfoley/[email protected]
with:
body-auto-close: false
protected-branch-auto-close: false
body-comment: >
## Issue Reference
In order to be considered for merging, the pull request description must refer to a
specific issue number. This is described in our
[contributing guide](https://unify.ai/docs/ivy/overview/contributing/the_basics.html#todo-list-issues) and our PR template.
This check is looking for a phrase similar to: "Fixes #XYZ" or "Resolves #XYZ" where XYZ is the issue
number that this PR is meant to address.
welcome:
name: Welcome
runs-on: ubuntu-latest
timeout-minutes: 10
needs: semantics
if: github.event.action == 'opened'
steps:
- uses: actions/first-interaction@v1
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
pr-message: |-
Congrats on making your first Pull Request and thanks for supporting Ivy! 🎉
Joing the conversation in our [Discord](https://discord.com/invite/sXyFF8tDtm)
Here are some notes to understand our tests:
- We have merged all the tests in one file called \`display_test_results\` job. 👀 It contains the following two sections:
- **Combined Test Results:** This shows the results of all the ivy tests that ran on the PR. ✔️
- **New Failures Introduced:** This lists the tests that are passing on main, but fail on the PR Fork.
Please try to make sure that there are no such tests. 💪
name: Check Semantic and welcome new contributors

on:
pull_request_target:
types:
- opened
- edited
- synchronize
- reopened
workflow_call:

permissions:
pull-requests: write

jobs:
pr-compliance-checks:
name: PR Compliance Checks
runs-on: ubuntu-latest
timeout-minutes: 10
steps:
- uses: mtfoley/[email protected]
with:
body-auto-close: false
protected-branch-auto-close: false
body-comment: >
## Issue Reference
In order to be considered for merging, the pull request description must refer to a
specific issue number. This is described in our
[contributing guide](https://unify.ai/docs/ivy/overview/contributing/the_basics.html#todo-list-issues) and our PR template.
This check is looking for a phrase similar to: "Fixes #XYZ" or "Resolves #XYZ" where XYZ is the issue
number that this PR is meant to address.
welcome:
name: Welcome
runs-on: ubuntu-latest
timeout-minutes: 10
if: github.event.action == 'opened'
steps:
- uses: actions/first-interaction@v1
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
pr-message: |-
Congrats on making your first Pull Request and thanks for supporting Ivy! 🎉
Join the conversation in our [Discord](https://discord.com/invite/sXyFF8tDtm)
Here are some notes to understand our tests:
- We have merged all the tests in one file called \`display_test_results\` job. 👀 It contains the following two sections:
- **Combined Test Results:** This shows the results of all the ivy tests that ran on the PR. ✔️
- **New Failures Introduced:** This lists the tests that fails on this PR.
Please make sure they are passing. 💪
Keep in mind that we will assign an engineer for this task and they will look at it based on the workload that they have, so be patient.
4 changes: 2 additions & 2 deletions docs/overview/contributing/error_handling.rst
@@ -10,7 +10,7 @@ Error Handling

This section, "Error Handling" aims to assist you in navigating through some common errors you might encounter while working with the Ivy's Functional API. We'll go through some common errors which you might encounter while working as a contributor or a developer.

#. This is the case where we pass in a dtype to `torch` which is not actually supported by the torch's native framework itself. The function which was
#. This is the case where we pass in a dtype to `torch` which is not actually supported by the torch's native framework itself.

.. code-block:: python
@@ -64,7 +64,7 @@ This section, "Error Handling" aims to assist you in navigating through some com
E
E You can reproduce this example by temporarily adding @reproduce_failure('6.82.4', b'AXicY2BAABYQwQgiAABDAAY=') as a decorator on your test case
#. This is a similar assertion as stated in point 2 but with torch and ground-truth tensorflow not matching but the matrices are quite different so there should be an issue in the backends rather than a numerical instability here:
#. This is a similar assertion as stated in point 2 but with torch and ground-truth tensorflow not matching but the matrices are quite different so there should be an issue in the backends rather than a numerical instability here.
.. code-block:: python
13 changes: 11 additions & 2 deletions ivy/data_classes/array/experimental/activations.py
@@ -123,14 +123,23 @@ def prelu(
"""
return ivy.prelu(self._data, slope, out=out)

def relu6(self, /, *, out: Optional[ivy.Array] = None) -> ivy.Array:
def relu6(
self,
/,
*,
complex_mode: Literal["split", "magnitude", "jax"] = "jax",
out: Optional[ivy.Array] = None,
) -> ivy.Array:
"""
Apply the rectified linear unit 6 function element-wise.
Parameters
----------
self
input array
complex_mode
optional specifier for how to handle complex data types. See
``ivy.func_wrapper.handle_complex_input`` for more detail.
out
optional output array, for writing the result to.
It must have a shape that the inputs broadcast to.
@@ -156,7 +165,7 @@ def relu6(self, /, *, out: Optional[ivy.Array] = None) -> ivy.Array:
>>> print(y)
ivy.array([0., 0., 1., 2., 3., 4., 5., 6., 6.])
"""
return ivy.relu6(self._data, out=out)
return ivy.relu6(self._data, complex_mode=complex_mode, out=out)

def logsigmoid(
self: ivy.Array,
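The docstring example above (`ivy.array([0., 0., 1., 2., 3., 4., 5., 6., 6.])`) follows from the element-wise rule `relu6(x) = min(max(x, 0), 6)`. A minimal NumPy sketch of that rule — illustrative only, not the actual Ivy implementation — looks like this:

```python
import numpy as np

def relu6(x):
    # Clamp each element to the interval [0, 6]: min(max(x, 0), 6)
    return np.minimum(np.maximum(x, 0), 6)

x = np.array([-1., 0., 1., 2., 3., 4., 5., 6., 7.])
print(relu6(x))  # [0. 0. 1. 2. 3. 4. 5. 6. 6.]
```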
10 changes: 10 additions & 0 deletions ivy/data_classes/container/experimental/activations.py
@@ -329,6 +329,7 @@ def static_relu6(
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
complex_mode: Literal["split", "magnitude", "jax"] = "jax",
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""
@@ -351,6 +352,9 @@ def static_relu6(
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
complex_mode
optional specifier for how to handle complex data types. See
``ivy.func_wrapper.handle_complex_input`` for more detail.
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
@@ -379,6 +383,7 @@ def static_relu6(
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
complex_mode=complex_mode,
out=out,
)

@@ -390,6 +395,7 @@ def relu6(
to_apply: Union[bool, ivy.Container] = True,
prune_unapplied: Union[bool, ivy.Container] = False,
map_sequences: Union[bool, ivy.Container] = False,
complex_mode: Literal["split", "magnitude", "jax"] = "jax",
out: Optional[ivy.Container] = None,
) -> ivy.Container:
"""
@@ -412,6 +418,9 @@ def relu6(
map_sequences
Whether to also map method to sequences (lists, tuples).
Default is ``False``.
complex_mode
optional specifier for how to handle complex data types. See
``ivy.func_wrapper.handle_complex_input`` for more detail.
out
optional output container, for writing the result to. It must have a shape
that the inputs broadcast to.
@@ -439,6 +448,7 @@ def relu6(
to_apply=to_apply,
prune_unapplied=prune_unapplied,
map_sequences=map_sequences,
complex_mode=complex_mode,
out=out,
)

4 changes: 3 additions & 1 deletion ivy/functional/backends/jax/experimental/activations.py
@@ -23,7 +23,9 @@ def logit(
return jnp.log(x / (1 - x))


def relu6(x: JaxArray, /, *, out: Optional[JaxArray] = None) -> JaxArray:
def relu6(
x: JaxArray, /, *, complex_mode="jax", out: Optional[JaxArray] = None
) -> JaxArray:
relu6_func = jax.nn.relu6

# sets gradient at 0 and 6 to 0 instead of 0.5
4 changes: 3 additions & 1 deletion ivy/functional/backends/numpy/experimental/activations.py
@@ -44,7 +44,9 @@ def thresholded_relu(


@_scalar_output_to_0d_array
def relu6(x: np.ndarray, /, *, out: Optional[np.ndarray] = None) -> np.ndarray:
def relu6(
x: np.ndarray, /, *, complex_mode="jax", out: Optional[np.ndarray] = None
) -> np.ndarray:
return np.minimum(np.maximum(x, 0, dtype=x.dtype), 6, out=out, dtype=x.dtype)


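The NumPy backend line above passes `dtype=x.dtype` to both `np.maximum` and `np.minimum`. The reason is that mixing an array with the Python scalars `0` and `6` can otherwise promote the result; pinning the dtype keeps, e.g., `float32` inputs from being upcast. A small sketch of the same pattern (not the Ivy code itself):

```python
import numpy as np

def relu6(x):
    # dtype=x.dtype prevents NumPy's scalar promotion from upcasting
    # a float32 input to float64 when clamping against 0 and 6
    return np.minimum(np.maximum(x, 0, dtype=x.dtype), 6, dtype=x.dtype)

x = np.array([-3.0, 2.5, 9.0], dtype=np.float32)
y = relu6(x)
print(y.dtype)  # float32
```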
7 changes: 6 additions & 1 deletion ivy/functional/backends/paddle/experimental/activations.py
@@ -54,7 +54,12 @@ def thresholded_relu(
)


def relu6(x: paddle.Tensor, /, *, out: Optional[paddle.Tensor] = None) -> paddle.Tensor:
@with_unsupported_device_and_dtypes(
{"2.5.1 and below": {"cpu": ("bfloat16",)}}, backend_version
)
def relu6(
x: paddle.Tensor, /, *, complex_mode="jax", out: Optional[paddle.Tensor] = None
) -> paddle.Tensor:
if x.dtype in [paddle.float32, paddle.float64]:
return F.relu6(x)
if paddle.is_complex(x):
@@ -38,8 +38,7 @@ def thresholded_relu(
return tf.cast(tf.where(x > threshold, x, 0), x.dtype)


@with_unsupported_dtypes({"2.13.0 and below": ("complex",)}, backend_version)
def relu6(x: Tensor, /, *, out: Optional[Tensor] = None) -> Tensor:
def relu6(x: Tensor, /, *, complex_mode="jax", out: Optional[Tensor] = None) -> Tensor:
return tf.nn.relu6(x)


4 changes: 3 additions & 1 deletion ivy/functional/backends/torch/experimental/activations.py
@@ -34,7 +34,9 @@ def thresholded_relu(


@with_unsupported_dtypes({"2.0.1 and below": ("float16",)}, backend_version)
def relu6(x: torch.Tensor, /, *, out: Optional[torch.Tensor] = None) -> torch.Tensor:
def relu6(
x: torch.Tensor, /, *, complex_mode="jax", out: Optional[torch.Tensor] = None
) -> torch.Tensor:
return torch.nn.functional.relu6(x)


2 changes: 1 addition & 1 deletion ivy/functional/frontends/jax/nn/non_linear_activations.py
@@ -273,7 +273,7 @@ def relu(x):

@to_ivy_arrays_and_back
def relu6(x):
res = ivy.relu6(x)
res = ivy.relu6(x, complex_mode="jax")
return _type_conversion_64(res)


2 changes: 2 additions & 0 deletions ivy/functional/frontends/numpy/__init__.py
@@ -565,6 +565,7 @@ def promote_types_of_numpy_inputs(
_sin,
_tan,
_degrees,
_arctan2,
)

from ivy.functional.frontends.numpy.mathematical_functions.handling_complex_numbers import ( # noqa
@@ -672,6 +673,7 @@ def promote_types_of_numpy_inputs(
arccos = ufunc("_arccos")
arcsin = ufunc("_arcsin")
arctan = ufunc("_arctan")
arctan2 = ufunc("_arctan2")
cos = ufunc("_cos")
deg2rad = ufunc("_deg2rad")
rad2deg = ufunc("_rad2deg")
@@ -103,6 +103,32 @@ def _arctan(
return ret


# arctan2


@handle_numpy_out
@handle_numpy_dtype
@to_ivy_arrays_and_back
@handle_numpy_casting
@from_zero_dim_arrays_to_scalar
def _arctan2(
x1,
x2,
/,
out=None,
*,
where=True,
casting="same_kind",
order="K",
dtype=None,
subok=True,
):
ret = ivy.atan2(x1, x2, out=out)
if ivy.is_array(where):
ret = ivy.where(where, ret, ivy.default(out, ivy.zeros_like(ret)), out=out)
return ret


@handle_numpy_out
@handle_numpy_dtype
@to_ivy_arrays_and_back
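The `where=` handling in `_arctan2` above mirrors the NumPy ufunc convention: compute the result everywhere, then keep it only at positions where the mask is `True`, falling back to the `out` buffer (or zeros) elsewhere. A plain-NumPy sketch of that masking pattern — the helper name is hypothetical, chosen only for illustration:

```python
import numpy as np

def arctan2_where(x1, x2, where=True):
    # Compute atan2 element-wise, then zero out positions
    # excluded by the boolean mask, as in the frontend's where= handling
    ret = np.arctan2(x1, x2)
    if isinstance(where, np.ndarray):
        ret = np.where(where, ret, np.zeros_like(ret))
    return ret

x1 = np.array([0.0, 1.0])
x2 = np.array([1.0, 0.0])
mask = np.array([True, False])
# atan2(0, 1) = 0 is kept; atan2(1, 0) = pi/2 is masked to 0
print(arctan2_where(x1, x2, where=mask))  # [0. 0.]
```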

0 comments on commit 9497bdf
