This repository was archived by the owner on Jul 1, 2023. It is now read-only.
Merged
Changes from 9 commits
Commits
60 commits
9885f38
Re-organized the operators source files.
eaplatanios Apr 1, 2019
3ce90c0
Added support for 'stacked', 'concatenated', 'gathered', 'batchGather…
eaplatanios Apr 1, 2019
19a3add
Reverted back to 4-space tabs.
eaplatanios Apr 1, 2019
b3f6281
Made some other minor changes.
eaplatanios Apr 1, 2019
111d96c
Added support for 'selecting'.
eaplatanios Apr 1, 2019
371021b
Added support for 'nonZeroIndices'.
eaplatanios Apr 1, 2019
112707b
Minor edits.
eaplatanios Apr 1, 2019
3594e0e
Addressed Richard's feedback.
eaplatanios Apr 1, 2019
adf20eb
Addressed Richard's comments.
eaplatanios Apr 1, 2019
4ae4e08
Addressed Richard's comments.
eaplatanios Apr 2, 2019
b0aba5d
Updated the convolution ops to support explicit paddings.
eaplatanios Apr 2, 2019
05704d0
Small edits.
eaplatanios Apr 2, 2019
a686a76
Updated the convolution ops to support explicit paddings.
eaplatanios Apr 2, 2019
cc46658
Small fix.
eaplatanios Apr 2, 2019
8494c04
Small fix.
eaplatanios Apr 2, 2019
976061f
Added a new tensor initializer from ranges of tensors.
eaplatanios Apr 2, 2019
5dfaaee
Added documentation string for the "explicit" padding scheme.
eaplatanios Apr 2, 2019
eda9514
More fixes.
eaplatanios Apr 3, 2019
77c1385
Merge branch 'conv-fix' of github.com:eaplatanios/swift-apis into con…
eaplatanios Apr 3, 2019
b41c8c4
Some fixes.
eaplatanios Apr 3, 2019
aed430a
Added 'zerosLike' and 'onesLike' tensor initializers.
eaplatanios Apr 12, 2019
5a093a8
Added a new 'stacking' tensor initializer and made some compatibility…
eaplatanios Apr 15, 2019
78a3eab
Merge remote-tracking branch 'upstream/master' into working
eaplatanios Apr 15, 2019
467a443
Added a new 'tiling' tensor initializer.
eaplatanios Apr 15, 2019
1faaef4
Minor edit.
eaplatanios Apr 15, 2019
94cf85f
Made some refactoring.
eaplatanios Apr 15, 2019
e0bbfc0
Bug fix.
eaplatanios Apr 15, 2019
b74079d
Merged upstream changes.
eaplatanios Apr 18, 2019
6c04368
Added support for the split op and its VJP.
eaplatanios Apr 19, 2019
ca8ce02
Added VJPs for stacking and tiling.
eaplatanios Apr 19, 2019
26e9123
Added VJP for concatenating.
eaplatanios Apr 19, 2019
87db644
Added the gathering VJP.
eaplatanios Apr 19, 2019
49bfe8d
Bug fixes.
eaplatanios Apr 19, 2019
10de441
Added an 'Optimizable' protocol.
eaplatanios Apr 19, 2019
64159f2
Merged upstream changes.
eaplatanios Apr 19, 2019
a630396
Moved some more activation functions from the stdlib.
eaplatanios Apr 19, 2019
c3243f4
Added log-softmax VJP.
eaplatanios Apr 19, 2019
4547a6d
Minor bug fix.
eaplatanios Apr 20, 2019
19cdbd9
Brought some initializers from stdlib.
eaplatanios Apr 20, 2019
a16d911
Brought some more stuff from the stdlib.
eaplatanios Apr 20, 2019
34b475a
Minor edit.
eaplatanios Apr 20, 2019
86072a4
Moved some more stuff to swift-apis.
eaplatanios Apr 20, 2019
bc0a581
Removed all the newly-added ops.
eaplatanios Apr 20, 2019
a91c00a
Moved some more stuff to swift-apis.
eaplatanios Apr 20, 2019
0ad9843
Moved some more stuff to swift-apis.
eaplatanios Apr 20, 2019
1120692
Added a README file to the 'Operators' source directory.
eaplatanios Apr 20, 2019
e7a04d2
Brought the gradient helper functions from the stdlib.
eaplatanios Apr 20, 2019
3ee21ff
Bug fixes.
eaplatanios Apr 20, 2019
ef1c73b
Brought the tensor tests from the stdlib.
eaplatanios Apr 20, 2019
0e06843
Minor bug fix.
eaplatanios Apr 20, 2019
eb407cf
Addressed Richard's comments.
eaplatanios Apr 20, 2019
f4b7e01
Minor edit.
eaplatanios Apr 20, 2019
b207e42
Reverted the change in the existing optimizer implementations.
eaplatanios Apr 20, 2019
3dcd46d
Added VJPs for some operations.
eaplatanios Apr 20, 2019
5548c56
Incorporated fix from stdlib.
eaplatanios Apr 21, 2019
89fb4e4
Addressed Richard's feedback.
eaplatanios Apr 21, 2019
61eae26
Changed the indentation in the 'PythonConversion.swift' file.
eaplatanios Apr 21, 2019
3cdd808
Changed the indentation in the 'Random.swift' file.
eaplatanios Apr 21, 2019
4b87827
Minor edit.
eaplatanios Apr 21, 2019
a5edd32
Tabs to spaces.
eaplatanios Apr 21, 2019
12 changes: 10 additions & 2 deletions Sources/DeepLearning/Helpers.swift
@@ -13,12 +13,20 @@
 // limitations under the License.
 
 #if !COMPILING_TENSORFLOW_MODULE
-import TensorFlow
+@_exported import TensorFlow
 #endif
 
+/// Returns a tensor with the same shape and scalars as the specified tensor.
+@inlinable
+@differentiable
+public func identity<Scalar>(_ x: Tensor<Scalar>) -> Tensor<Scalar> {
+    return x
+}
+
 // `pow` is defined in Darwin/Glibc on `Float` and `Double`, but there doesn't exist a generic
 // version for `FloatingPoint`.
 // This is a manual definition.
-func pow<T : BinaryFloatingPoint>(_ x: T, _ y: T) -> T {
+@inlinable
+func pow<T: BinaryFloatingPoint>(_ x: T, _ y: T) -> T {
     return T(pow(Double(x), Double(y)))
 }
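For reference, a minimal self-contained sketch of the `pow` shim above (the surrounding setup is illustrative). Inside the generic function, the call `pow(Double(x), Double(y))` resolves to the concrete `pow(Double, Double)` from the platform math library rather than recursing, because Swift prefers a non-generic overload over a generic one:

```swift
import Foundation

// Route any BinaryFloatingPoint through Double and convert back.
func pow<T: BinaryFloatingPoint>(_ x: T, _ y: T) -> T {
    return T(pow(Double(x), Double(y)))
}

let x: Float = 2
let y: Float = 10
print(pow(x, y))  // 1024.0
```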
6 changes: 5 additions & 1 deletion Sources/DeepLearning/Initializers.swift
@@ -13,9 +13,13 @@
 // limitations under the License.
 
 #if !COMPILING_TENSORFLOW_MODULE
-@_exported import TensorFlow
+import TensorFlow
 #endif
 
+//===------------------------------------------------------------------------------------------===//
+// Random
+//===------------------------------------------------------------------------------------------===//
+
 public extension Tensor where Scalar == Int32 {
     /// Creates a tensor with the specified shape, randomly sampling scalar values
     /// from a discrete uniform distribution.
2 changes: 1 addition & 1 deletion Sources/DeepLearning/Layer.swift
@@ -13,7 +13,7 @@
 // limitations under the License.
 
 #if !COMPILING_TENSORFLOW_MODULE
-@_exported import TensorFlow
+import TensorFlow
 #endif
 
 /// A value that indicates either a training phase or an inference phase for a layer.
303 changes: 303 additions & 0 deletions Sources/DeepLearning/Operators/Basic.swift
@@ -0,0 +1,303 @@
// Copyright 2018 The TensorFlow Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#if !COMPILING_TENSORFLOW_MODULE
import TensorFlow
#endif

public extension Tensor where Scalar: TensorFlowScalar {
/// Stacks the current tensor with `tensors`, along the `axis` dimension, into a tensor with
/// rank one higher than the current tensor and each tensor in `tensors`.
///
/// Given that `self` and each tensor in `tensors` have shape `[A, B, C]`, and `tensors.count = N - 1`:
/// - if `axis == 0`, then the resulting tensor will have the shape `[N, A, B, C]`.
/// - if `axis == 1`, then the resulting tensor will have the shape `[A, N, B, C]`.
/// - etc.
///
/// For example:
/// ```
/// // 'x' is [1, 4]
/// // 'y' is [2, 5]
/// // 'z' is [3, 6]
/// x.stacked(with: [y, z]) // is [[1, 4], [2, 5], [3, 6]]
/// x.stacked(with: [y, z], alongAxis: 1) // is [[1, 2, 3], [4, 5, 6]]
/// ```
///
/// This is the opposite of `unstacked`.
///
/// - Parameters:
/// - tensors: Tensors to stack with the current tensor.
/// - axis: Dimension along which to stack. Negative values wrap around.
///
/// - Precondition: All tensors must have the same shape as the current tensor.
/// - Precondition: `axis` must be in the range `[-rank, rank)`.
///
/// - Returns: The stacked tensor.
@inlinable
// @differentiable(vjp: _vjpStacked where Scalar: TensorFlowFloatingPoint)
func stacked(with tensors: [Tensor], alongAxis axis: Int64 = 0) -> Tensor {
return Raw.pack([self] + tensors, axis: axis)
}
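A short usage sketch of `stacked(with:alongAxis:)` mirroring the documented example (assuming the TensorFlow module is imported and this extension is in scope):

```swift
let x: Tensor<Int32> = [1, 4]
let y: Tensor<Int32> = [2, 5]
let z: Tensor<Int32> = [3, 6]

print(x.stacked(with: [y, z]))                // [[1, 4], [2, 5], [3, 6]], shape [3, 2]
print(x.stacked(with: [y, z], alongAxis: 1))  // [[1, 2, 3], [4, 5, 6]], shape [2, 3]
```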

/// Concatenates the current tensor with `tensors` along the `axis` dimension.
///
/// If `self` and `tensors` are gathered into a single array `values`, where
/// `values[i].shape = [D0, D1, ... Daxis(i), ...Dn]`, then the concatenated result has shape
/// `[D0, D1, ... Raxis, ...Dn]`, where `Raxis = sum(Daxis(i))`. That is, the data from the
/// input tensors is joined along the `axis` dimension.
///
/// For example:
/// ```
/// // t1 is [[1, 2, 3], [4, 5, 6]]
/// // t2 is [[7, 8, 9], [10, 11, 12]]
/// t1.concatenated(with: [t2]) // is [[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]]
/// t1.concatenated(with: [t2], alongAxis: 1) // is [[1, 2, 3, 7, 8, 9], [4, 5, 6, 10, 11, 12]]
///
/// // t3 has shape [2, 3]
/// // t4 has shape [2, 3]
/// t3.concatenated(with: [t4]) // has shape [4, 3]
/// t3.concatenated(with: [t4], alongAxis: 1) // has shape [2, 6]
/// ```
///
/// - Note: If you are concatenating along a new axis, consider using `stacked`.
///
/// - Parameters:
/// - tensors: Tensors to concatenate with the current tensor.
/// - axis: Dimension along which to concatenate. Negative values wrap around.
///
/// - Precondition: All tensors must have the same rank as the current tensor and all dimensions
/// except `axis` must be equal.
/// - Precondition: `axis` must be in the range `[-rank, rank)`.
///
/// - Returns: The concatenated tensor.
@inlinable
// @differentiable(vjp: _vjpConcatenated where Scalar : TensorFlowFloatingPoint)
func concatenated(with tensors: [Tensor], alongAxis axis: Int32 = 0) -> Tensor {
return Raw.concatV2([self] + tensors, axis: Tensor<Int32>(axis))
}
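A usage sketch mirroring the shape examples in the documentation (tensor literals are illustrative):

```swift
let t1: Tensor<Float> = [[1, 2, 3], [4, 5, 6]]     // shape [2, 3]
let t2: Tensor<Float> = [[7, 8, 9], [10, 11, 12]]  // shape [2, 3]

print(t1.concatenated(with: [t2]).shape)                // [4, 3]
print(t1.concatenated(with: [t2], alongAxis: 1).shape)  // [2, 6]
```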

/// Gathers slices of this tensor at `indices` along the `axis` dimension.
///
/// For 0-D (scalar) `indices`:
/// ```
/// result[p_0, ..., p_{axis-1},
/// p_{axis + 1}, ..., p_{N-1}] =
/// self[p_0, ..., p_{axis-1},
/// indices,
/// p_{axis + 1}, ..., p_{N-1}]
/// ```
///
/// For 1-D (vector) `indices`:
/// ```
/// result[p_0, ..., p_{axis-1},
/// i,
/// p_{axis + 1}, ..., p_{N-1}] =
/// self[p_0, ..., p_{axis-1},
/// indices[i],
/// p_{axis + 1}, ..., p_{N-1}]
/// ```
///
/// In the general case, produces a resulting tensor where:
/// ```
/// result[p_0, ..., p_{axis-1},
/// i_{batch\_dims}, ..., i_{M-1},
/// p_{axis + 1}, ..., p_{N-1}] =
/// self[p_0, ..., p_{axis-1},
/// indices[i_0, ..., i_{M-1}],
/// p_{axis + 1}, ..., p_{N-1}]
/// ```
/// where `N = self.rank` and `M = indices.rank`.
///
/// The shape of the resulting tensor is:
/// `self.shape[..<axis] + indices.shape + self.shape[(axis + 1)...]`.
///
/// - Note: On CPU, if an out-of-range index is found, an error is thrown. On GPU, if an
///   out-of-range index is found, a 0 is stored in the corresponding output value.
///
/// - Parameters:
/// - indices: Contains the indices to gather.
/// - axis: Dimension along which to gather. Negative values wrap around.
///
/// - Precondition: `axis` must be in the range `[-rank, rank)`.
///
/// - Returns: The gathered tensor.
@inlinable
// @differentiable(vjp: _vjpGathering where Scalar: TensorFlowFloatingPoint)
func gathering<I: TensorFlowInteger>(
atIndices indices: Tensor<I>,
alongAxis axis: Int32 = 0
) -> Tensor {
return Raw.gatherV2(params: self, indices: indices, axis: Tensor<Int32>(axis))
}
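A sketch of the shape rule `self.shape[..<axis] + indices.shape + self.shape[(axis + 1)...]` (values illustrative):

```swift
let params: Tensor<Float> = [[1, 2], [3, 4], [5, 6]]  // shape [3, 2]
let rowIndices: Tensor<Int32> = [2, 0]
let colIndices: Tensor<Int32> = [1]

print(params.gathering(atIndices: rowIndices))  // [[5, 6], [1, 2]]
// Indices of shape [1] along axis 1 give result shape [3] + [1] = [3, 1]:
print(params.gathering(atIndices: colIndices, alongAxis: 1))  // [[2], [4], [6]]
```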

/// Gathers slices of this tensor at `indices` along the `axis` dimension, while ignoring the
/// first `batchDims` dimensions that correspond to batch dimensions.
///
/// Performs functionality similar to `gathering`, except that the resulting tensor's shape is
/// `self.shape[..<axis] + indices.shape[batchDims...] + self.shape[(axis + 1)...]`.
///
/// - Parameters:
/// - indices: Contains the indices to gather.
/// - axis: Dimension along which to gather. Negative values wrap around.
/// - batchDims: Number of leading batch dimensions to ignore.
///
/// - Precondition: `axis` must be in the range `[-rank, rank)`, while also being greater than
/// or equal to `batchDims`.
/// - Precondition: `batchDims` must be less than `indices.rank`.
///
/// - Returns: The gathered tensor.
@inlinable
func batchGathering<I: TensorFlowInteger>(
atIndices indices: Tensor<I>,
alongAxis axis: Int32,
numBatchDims batchDims: Int32
) -> Tensor {
precondition(batchDims >= 0 && batchDims < indices.rank,
"'numBatchDims' must be non-negative and less than 'indices.rank'.")
precondition(batchDims < rank, "'numBatchDims' must be less than the tensor's rank.")

// Handle the axis argument by transposing the axis dimension so that it is the first
// non-batch dimension, recursively calling `batchGathering` with `axis = 0`, and then
// transposing the result to put the pre-axis dimensions before the indices dimensions.
if axis != batchDims {
// Adjust axis to be positive.
let posAxis = axis < 0 ? axis + rank : axis

precondition(posAxis >= 0 && posAxis < rank, "'axis' is out of range.")
precondition(batchDims <= posAxis, "'batchDims' must be less than or equal to 'axis'.")

// Move self[axis] up to self[batchDims].
let permutation = Tensor<Int32>(0 ..< batchDims).concatenated(with: [
Tensor<Int32>(axis).rankLifted(),
Tensor<Int32>(rangeFrom: batchDims, to: posAxis, stride: 1),
Tensor<Int32>(rangeFrom: axis + 1, to: rank, stride: 1)])
let tensor = transposed(withPermutations: permutation)
let result = tensor.batchGathering(
atIndices: indices, alongAxis: batchDims, numBatchDims: batchDims)

// Move the result dimensions corresponding to self[batchDims ..< axis] to just before
// the dimensions corresponding to indices[batchDims ...].
let start = indices.rank + posAxis - batchDims
let resultPermutation = Tensor<Int32>(0 ..< batchDims).concatenated(with: [
Tensor<Int32>(rangeFrom: indices.rank, to: start, stride: 1),
Tensor<Int32>(batchDims ..< indices.rank),
Tensor<Int32>(rangeFrom: start, to: result.rank, stride: 1)])
return result.transposed(withPermutations: resultPermutation)
}

let castedShape = Tensor<I>(shapeTensor)
var batchIndices = indices
var accumulated = Tensor<I>(ones: [])
for d in (1...batchDims).reversed() {
accumulated *= castedShape[d]
let dValue = castedShape[d - 1]
let dIndices = Tensor<I>(
rangeFrom: Tensor<I>(zeros: []),
to: dValue,
stride: Tensor<I>(ones: [])
) * accumulated
let dShape = Tensor<Int32>(d - 1).stacked(with: [
Tensor<Int32>(dValue),
Tensor<Int32>(indices.rank - 1)])
batchIndices += dIndices.reshaped(toShape: dShape)
}

let flatIndices = batchIndices.flattened()
let outerShape = shapeTensor[Int(batchDims + 1)...]
let innerShape = shapeTensor[..<Int(batchDims + 1)].product(squeezingAxes: [0])
let flatTensor = reshaped(toShape: innerShape.rankLifted().concatenated(with: outerShape))
let flatResult = flatTensor.gathering(atIndices: flatIndices)
return flatResult.reshaped(toShape: indices.shapeTensor.concatenated(with: outerShape))
}
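A worked sketch of the batched case: with `numBatchDims: 1`, row `i` of `indices` gathers from row `i` of the tensor, so the result shape is `shape[..<1] + indices.shape[1...]` (values illustrative):

```swift
let params: Tensor<Float> = [[10, 11, 12],
                             [20, 21, 22]]  // shape [2, 3]
let indices: Tensor<Int32> = [[2, 0],
                              [0, 1]]       // shape [2, 2]

// Row i of the result gathers from row i of `params`:
let gathered = params.batchGathering(atIndices: indices, alongAxis: 1, numBatchDims: 1)
print(gathered)  // [[12.0, 10.0], [20.0, 21.0]], shape [2, 2]
```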

/// Applies the provided boolean mask to this tensor.
///
/// For example:
/// ```
/// // 1-D example
/// // tensor is [0, 1, 2, 3]
/// // mask is [true, false, true, false]
/// tensor.masked(with: mask) // is [0, 2]
///
/// // 2-D example
/// // tensor is [[1, 2], [3, 4], [5, 6]]
/// // mask is [true, false, true]
/// tensor.masked(with: mask) // is [[1, 2], [5, 6]]
/// ```
///
/// In general, `0 < mask.rank = K <= tensor.rank`, and the `mask`'s shape must match the first
/// K dimensions of the `tensor`'s shape. We then have:
/// `tensor.masked(with: mask)[i, j1, ..., jd] = tensor[i1, ..., iK, j1, ..., jd]`, where
/// `[i1, ..., iK]` is the `i`th `true` entry of `mask` (row-major order).
///
/// The `axis` parameter can be used with `mask` to indicate the axis to mask from. In that
/// case, `axis + mask.rank <= tensor.rank` and the `mask`'s shape must match the first
/// `axis + mask.rank` dimensions of the `tensor`'s shape.
///
/// - Parameters:
/// - mask: K-D boolean tensor, where `K <= self.rank`.
/// - axis: 0-D integer tensor representing the axis in `self` to mask from, where
/// `K + axis <= self.rank`.
///
/// - Precondition: The `mask` cannot be a scalar: `mask.rank != 0`.
///
/// - Returns: `(self.rank - K + 1)`-dimensional tensor populated by entries in this tensor
/// corresponding to `true` values in `mask`.
@inlinable
func masked(with mask: Tensor<Bool>, alongAxis axis: Int32 = 0) -> Tensor {
precondition(mask.rank != 0, "The boolean mask cannot be a scalar.")
let posAxis = axis < 0 ? axis + rank : axis
let leadingSize = shapeTensor[posAxis ..< posAxis + mask.rank].product().rankLifted()
let reshapedTensor = reshaped(
toShape: shapeTensor[..<Int(posAxis)].concatenated(
with: [leadingSize, shapeTensor[Int(posAxis + mask.rank)...]]))
let indices = mask.flattened().whereTrue().squeezingShape(at: 1)
return reshapedTensor.gathering(atIndices: indices, alongAxis: posAxis)
}
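A usage sketch mirroring the 2-D example in the documentation:

```swift
let t: Tensor<Float> = [[1, 2], [3, 4], [5, 6]]  // shape [3, 2]
let mask: Tensor<Bool> = [true, false, true]     // shape [3]

print(t.masked(with: mask))  // [[1.0, 2.0], [5.0, 6.0]], shape [2, 2]
```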
}

public extension Tensor {
/// Returns the locations of non-zero / true values in this tensor.
///
/// The coordinates are returned in a 2-D tensor where the first dimension (rows) represents the
/// number of non-zero elements, and the second dimension (columns) represents the coordinates
/// of the non-zero elements. Keep in mind that the shape of the output tensor can vary
/// depending on how many true values there are in this tensor. Indices are output in row-major
/// order.
///
/// For example:
/// ```
/// // 'input' is [[true, false], [true, false]]
/// // 'input' has 2 true values and so the output has 2 rows.
/// // 'input' has rank of 2, and so the second dimension of the output has size 2.
/// input.nonZeroIndices() // is [[0, 0], [1, 0]]
///
/// // 'input' is [[[ true, false], [ true, false]],
/// // [[false, true], [false, true]],
/// // [[false, false], [false, true]]]
/// // 'input' has 5 true values and so the output has 5 rows.
/// // 'input' has rank 3, and so the second dimension of the output has size 3.
/// input.nonZeroIndices() // is [[0, 0, 0],
/// // [0, 1, 0],
/// // [1, 0, 1],
/// // [1, 1, 1],
/// // [2, 1, 1]]
/// ```
///
/// - Returns: A tensor with shape `(numTrue, self.rank)`, where `numTrue` is the number of
///   non-zero (or `true`) entries in this tensor.
@inlinable
func nonZeroIndices() -> Tensor<Int64> {
return Raw.where_(self)
}
}
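A sketch mirroring the first documented example:

```swift
let input: Tensor<Bool> = [[true, false], [true, false]]

print(input.nonZeroIndices())  // [[0, 0], [1, 0]] as Tensor<Int64>
```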
22 changes: 22 additions & 0 deletions Sources/DeepLearning/Operators/Math.swift
@@ -0,0 +1,22 @@
// Copyright 2018 The TensorFlow Authors. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

#if !COMPILING_TENSORFLOW_MODULE
import TensorFlow
#endif

/// Returns the values of the specified tensor rounded to the nearest integer, element-wise.
public func round<Scalar: BinaryFloatingPoint>(_ x: Tensor<Scalar>) -> Tensor<Scalar> {
return Raw.round(x)
}
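A usage sketch; note that TensorFlow's `Round` kernel rounds half-way cases to the nearest even integer (banker's rounding), unlike Swift's `rounded()`, which defaults to rounding half away from zero:

```swift
let x: Tensor<Float> = [0.4, 0.5, 1.5, 2.6]

print(round(x))  // [0.0, 0.0, 2.0, 3.0]
```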