Revise less equal #6720

Merged 14 commits on Aug 4, 2021
Changes from 8 commits
37 changes: 20 additions & 17 deletions docs/ops/comparison/LessEqual_1.md
@@ -4,48 +4,50 @@

**Category**: Comparison binary operation

**Short description**: *LessEqual* performs an element-wise comparison operation on two given tensors, applying the multi-directional broadcast rules specified in the *auto_broadcast* attribute.
Contributor comment: remove word "multi-directional"

**Detailed description**
Before performing the comparison, the input tensors *a* and *b* are broadcast if their shapes differ and the `auto_broadcast` attribute is not `none`. Broadcasting is performed according to the `auto_broadcast` value.

After broadcasting *LessEqual* does the following with the input tensors *a* and *b*:

\f[
o_{i} = a_{i} \leq b_{i}
\f]
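The broadcast-then-compare behavior above can be sketched with NumPy, which applies the same multi-directional broadcasting rules referenced by `auto_broadcast="numpy"` (an illustrative sketch, not the OpenVINO implementation; the `less_equal` helper name is hypothetical):

```python
import numpy as np

def less_equal(a, b, auto_broadcast="numpy"):
    """Element-wise a <= b with optional shape-match enforcement."""
    a, b = np.asarray(a), np.asarray(b)
    if auto_broadcast == "none" and a.shape != b.shape:
        # auto_broadcast="none" requires identical input shapes
        raise ValueError("auto_broadcast='none' requires matching shapes")
    # NumPy broadcasts mismatched shapes automatically; output is boolean
    return a <= b

# (2,2) tensor vs (1,) tensor: the scalar-like input broadcasts across both rows
out = less_equal([[1, 5], [7, 2]], [4])
print(out.tolist())  # [[True, False], [False, True]]
```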

**Attributes**:

* *auto_broadcast*

* **Description**: specifies rules used for auto-broadcasting of input tensors.
* **Range of values**:
* *none* - no auto-broadcasting is allowed, all input shapes should match,
* *numpy* - numpy broadcasting rules, description is available in [Broadcast Rules For Elementwise Operations](../broadcast_rules.md),
* *pdpd* - PaddlePaddle-style implicit broadcasting, description is available in [Broadcast Rules For Elementwise Operations](../broadcast_rules.md).
* **Type**: string
* **Default value**: "numpy"
* **Required**: *no*

**Inputs**

* **1**: A tensor of type *T* and arbitrary shape. **Required.**
* **2**: A tensor of type *T* and arbitrary shape. **Required.**

**Outputs**

* **1**: The result of the element-wise comparison operation applied to the input tensors. A tensor of type **boolean** with shape equal to the broadcasted shape of the two inputs.

**Types**

* *T*: arbitrary supported type.


**Examples**

*Example 1: no broadcast*

```xml
<layer ... type="LessEqual">
<data auto_broadcast="none"/>
<input>
<port id="0">
<dim>256</dim>
@@ -65,9 +67,10 @@ o_{i} = a_{i} <= b_{i}
</layer>
```

*Example 2: numpy broadcast*
```xml
<layer ... type="LessEqual">
<data auto_broadcast="numpy"/>
<input>
<port id="0">
<dim>8</dim>
115 changes: 115 additions & 0 deletions docs/template_plugin/tests/functional/op_reference/less_eq.cpp
@@ -0,0 +1,115 @@
// Copyright (C) 2018-2021 Intel Corporation
// SPDX-License-Identifier: Apache-2.0
//

#include <gtest/gtest.h>

#include <ie_core.hpp>
#include <ie_ngraph_utils.hpp>
#include <ngraph/ngraph.hpp>
#include <shared_test_classes/base/layer_test_utils.hpp>
#include <tuple>

#include "base_reference_test.hpp"

using namespace ngraph;
using namespace InferenceEngine;
using namespace reference_tests;

namespace {
struct LessEqualParams {
template <class IT, class OT>
    LessEqualParams(const ngraph::PartialShape& input_shape1, const ngraph::PartialShape& input_shape2, const ngraph::element::Type& iType,
                    const ngraph::element::Type& oType, const std::vector<IT>& iValues1, const std::vector<IT>& iValues2, const std::vector<OT>& oValues)
: pshape1(input_shape1), pshape2(input_shape2), inType(iType), outType(oType), inputData1(CreateBlob(iType, iValues1)),
inputData2(CreateBlob(iType, iValues2)), refData(CreateBlob(oType, oValues)) {}
ngraph::PartialShape pshape1;
ngraph::PartialShape pshape2;
ngraph::element::Type inType;
ngraph::element::Type outType;
InferenceEngine::Blob::Ptr inputData1;
InferenceEngine::Blob::Ptr inputData2;
InferenceEngine::Blob::Ptr refData;
};

class ReferenceLessEqualLayerTest : public testing::TestWithParam<LessEqualParams>, public CommonReferenceTest {
public:
void SetUp() override {
auto params = GetParam();
function = CreateFunction(params.pshape1, params.pshape2, params.inType, params.outType);
inputData = {params.inputData1, params.inputData2};
refOutData = {params.refData};
}
static std::string getTestCaseName(const testing::TestParamInfo<LessEqualParams>& obj) {
auto param = obj.param;
std::ostringstream result;
result << "inpt_shape1=" << param.pshape1 << "_";
result << "inpt_shape2=" << param.pshape2 << "_";
result << "iType=" << param.inType << "_";
result << "oType=" << param.outType;
return result.str();
}

private:
static std::shared_ptr<Function> CreateFunction(const PartialShape& input_shape1, const PartialShape& input_shape2, const element::Type& input_type,
const element::Type& expected_output_type) {
const auto in = std::make_shared<op::Parameter>(input_type, input_shape1);
const auto in2 = std::make_shared<op::Parameter>(input_type, input_shape2);
const auto LessEqual = std::make_shared<op::v1::LessEqual>(in, in2);
return std::make_shared<Function>(NodeVector {LessEqual}, ParameterVector {in, in2});
}
};

TEST_P(ReferenceLessEqualLayerTest, CompareWithHardcodedRefs) {
Exec();
}

template <element::Type_t IN_ET>
std::vector<LessEqualParams> generateLessEqualParams(const ngraph::element::Type& type) {
using T = typename element_type_traits<IN_ET>::value_type;
std::vector<LessEqualParams> lessEqParams {
        // cases below cover 1D, 2D, 3D and 4D shapes, including numpy-broadcast cases
LessEqualParams(ngraph::PartialShape {2, 2}, ngraph::PartialShape {2, 2}, type, ngraph::element::boolean,
std::vector<T> {0, 12, 23, 0},
std::vector<T> {0, 12, 23, 0},
std::vector<char> {1, 1, 1, 1}),
LessEqualParams(ngraph::PartialShape {2, 3}, ngraph::PartialShape {2, 3}, type, ngraph::element::boolean,
std::vector<T> {0, 6, 45, 1, 21, 21},
std::vector<T> {1, 18, 23, 1, 19, 21},
std::vector<char> {1, 1, 0, 1, 0, 1}),
LessEqualParams(ngraph::PartialShape {1}, ngraph::PartialShape {1}, type, ngraph::element::boolean,
std::vector<T> {53},
std::vector<T> {53},
std::vector<char> {1}),
LessEqualParams(ngraph::PartialShape {2, 4}, ngraph::PartialShape {2, 4}, type, ngraph::element::boolean,
std::vector<T> {0, 6, 25, 0, 11, 5, 11, 8},
std::vector<T> {1, 12, 23, 0, 10, 5, 13, 8},
std::vector<char> {1, 1, 0, 1, 0, 1, 1, 1}),
LessEqualParams(ngraph::PartialShape {3, 1, 2}, ngraph::PartialShape {1, 2, 1}, type, ngraph::element::boolean,
std::vector<T> {2, 1, 4, 1, 3, 1},
std::vector<T> {1, 1},
std::vector<char> {0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1}),
LessEqualParams(ngraph::PartialShape {2, 1, 2, 1}, ngraph::PartialShape {1, 2, 1}, type, ngraph::element::boolean,
std::vector<T> {2, 1, 4, 1},
std::vector<T> {1, 1},
std::vector<char> {0, 1, 0, 1})};
return lessEqParams;
}
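The hardcoded reference vectors above can be cross-checked independently; for instance, the `{3, 1, 2}` vs `{1, 2, 1}` broadcast case reproduces in a short NumPy sketch (illustrative only, not part of the test suite):

```python
import numpy as np

# Same data as the {3,1,2} vs {1,2,1} LessEqualParams entry above.
a = np.array([2, 1, 4, 1, 3, 1]).reshape(3, 1, 2)
b = np.array([1, 1]).reshape(1, 2, 1)

# Broadcasting yields an output of shape (3, 2, 2); flatten to compare
# against the reference vector stored as std::vector<char>.
ref = (a <= b).astype(np.int8).ravel()
print(ref.tolist())  # [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
```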

std::vector<LessEqualParams> generateLessEqualCombinedParams() {
const std::vector<std::vector<LessEqualParams>> LessEqualTypeParams {generateLessEqualParams<element::Type_t::f32>(ngraph::element::f32),
generateLessEqualParams<element::Type_t::f16>(ngraph::element::f16),
generateLessEqualParams<element::Type_t::i32>(ngraph::element::i32),
generateLessEqualParams<element::Type_t::u32>(ngraph::element::u32),
generateLessEqualParams<element::Type_t::u8>(ngraph::element::boolean)};
std::vector<LessEqualParams> combinedParams;

for (const auto& params : LessEqualTypeParams) {
combinedParams.insert(combinedParams.end(), params.begin(), params.end());
}
return combinedParams;
}

INSTANTIATE_TEST_SUITE_P(smoke_LessEqual_With_Hardcoded_Refs, ReferenceLessEqualLayerTest, ::testing::ValuesIn(generateLessEqualCombinedParams()),
ReferenceLessEqualLayerTest::getTestCaseName);
} // namespace
@@ -53,6 +53,7 @@
'HSwish-4',
'HardSigmoid-1',
'Interpolate-4',
    'LessEqual-1',
'LRN-1',
'LSTMCell-4',
'LSTMSequence-5',
67 changes: 0 additions & 67 deletions ngraph/test/backend/comparison.in.cpp
@@ -151,70 +151,3 @@ NGRAPH_TEST(${BACKEND_NAME}, less)
handle->call_with_validate({result}, {a, b});
EXPECT_EQ((vector<char>{0, 0, 1, 0, 1, 0, 0, 1}), read_vector<char>(result));
}

NGRAPH_TEST(${BACKEND_NAME}, lesseq)
{
Shape shape{2, 2, 2};
auto A = make_shared<op::Parameter>(element::f32, shape);
auto B = make_shared<op::Parameter>(element::f32, shape);
auto f = make_shared<Function>(make_shared<op::v1::LessEqual>(A, B), ParameterVector{A, B});

auto backend = runtime::Backend::create("${BACKEND_NAME}");

// Create some tensors for input/output
auto a = backend->create_tensor(element::f32, shape);
copy_data(a, vector<float>{1, 8, -8, 17, -0.5, 0, 2, 1});
auto b = backend->create_tensor(element::f32, shape);
copy_data(b, vector<float>{1, 2, -8, 8, 0, 0, 0.5, 1.5});
auto result = backend->create_tensor(element::boolean, shape);

auto handle = backend->compile(f);
handle->call_with_validate({result}, {a, b});
EXPECT_EQ((vector<char>{1, 0, 1, 0, 1, 1, 0, 1}), read_vector<char>(result));
}

NGRAPH_TEST(${BACKEND_NAME}, lesseq_int32)
{
Shape shape{2, 2};
auto A = make_shared<op::Parameter>(element::i32, shape);
auto B = make_shared<op::Parameter>(element::i32, shape);
auto f = make_shared<Function>(make_shared<op::v1::LessEqual>(A, B), ParameterVector{A, B});

auto backend = runtime::Backend::create("${BACKEND_NAME}");

// Create some tensors for input/output
auto a = backend->create_tensor(element::i32, shape);
copy_data(a, vector<int32_t>{0x40000170, 0x40000005, 0x40000005, -5});
auto b = backend->create_tensor(element::i32, shape);
copy_data(b, vector<int32_t>{0x40000140, 0x40000001, 0x40000005, 0});
auto result = backend->create_tensor(element::boolean, shape);

auto handle = backend->compile(f);
handle->call_with_validate({result}, {a, b});
EXPECT_EQ((vector<char>{0, 0, 1, 1}), read_vector<char>(result)); // NNP result {1, 1, 0, 1}
}
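The removed `lesseq_int32` expectations can be sanity-checked outside any backend with a NumPy sketch over the same hex-valued int32 data (illustrative only):

```python
import numpy as np

# Same 2x2 int32 data as the removed lesseq_int32 test above.
a = np.array([0x40000170, 0x40000005, 0x40000005, -5], dtype=np.int32)
b = np.array([0x40000140, 0x40000001, 0x40000005, 0], dtype=np.int32)

# Signed int32 comparison, matching the EXPECT_EQ reference vector.
print((a <= b).astype(np.int8).tolist())  # [0, 0, 1, 1]
```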

NGRAPH_TEST(${BACKEND_NAME}, lesseq_bool)
{
Shape shape{2, 2, 2};
auto A = make_shared<op::Parameter>(element::boolean, shape);
auto B = make_shared<op::Parameter>(element::boolean, shape);
auto f = make_shared<Function>(make_shared<op::v1::LessEqual>(A, B), ParameterVector{A, B});

auto backend = runtime::Backend::create("${BACKEND_NAME}");

// Create some tensors for input/output
auto a = backend->create_tensor(element::boolean, shape);
copy_data(a, vector<char>{1, 1, 1, 1, 1, 1, 1, 1});
auto b = backend->create_tensor(element::boolean, shape);
copy_data(b, vector<char>{0, 0, 0, 0, 0, 0, 0, 0});
auto result = backend->create_tensor(element::boolean, shape);

// Overwrite the initial result vector to make sure we're not just coincidentally getting the
// right value.
copy_data(result, vector<char>{1, 1, 1, 1, 1, 1, 1, 1});

auto handle = backend->compile(f);
handle->call_with_validate({result}, {a, b});
EXPECT_EQ((vector<char>{0, 0, 0, 0, 0, 0, 0, 0}), read_vector<char>(result));
}
3 changes: 0 additions & 3 deletions ngraph/test/runtime/ie/unit_test.manifest
@@ -426,7 +426,6 @@ notequal
greater
greatereq
less
lesseq
sum_3d_to_scalar_int32
sum_2d_to_scalar_int8
max_pool_uint8
@@ -486,7 +485,6 @@ logical_xor
logical_or
logical_and
gather_axis_0_bool
lesseq_bool
auto_bcast_binary_elementwise
auto_bcast_binary_elementwise_pdpd
any_2x2_to_scalar_true
@@ -755,7 +753,6 @@ strided_slice_stride_optional
divide_int32
divide_cpp_rounding_int32
divide_python_rounding_int32
lesseq_int32

# Constant and Low Precision
constant_equality_u4_2x2x3