(deprecated) " + depMsg + "
";
         if (summary != null && !summary.equals("")) {
-            result = result + " - " + summary;
+            result = result + "\n" + summary;
         }
         return result;
     }
@@ -495,7 +495,7 @@ void populateUidValues(List
-     * It wrapped in ThreadLocal because of its non-thread safe nature
-     */
-    private static ThreadLocal
+ * ======================= SpeechClient =======================
+ *
+ * Service Description: Service that implements Google Cloud Speech API.
+ *
+ * Sample for SpeechClient:
+ *
+ * ======================= AdaptationClient =======================
+ *
+ * Service Description: Service that implements Google Cloud Speech Adaptation API.
+ *
+ * Sample for AdaptationClient:
+ *
+ * (deprecated) As of version 1.1, use . . . instead This is a simple description of the method. . .\n Superman!\n (deprecated) Use (deprecated) Some text (deprecated) This one is deprecated :( \n Usage guidelines:\n Note: This method does not support applying settings to streaming methods."
syntax:
content: "public ProductSearchSettings.Builder applyToAllUnaryMethods(ApiFunction The default instance has everything set to sensible defaults:\n\n The builder of this class is recursive, so contained classes are themselves builders. When\n build() is called, the tree of builders is called to create the complete settings object.\n\n For example, to set the total timeout of createProductSet to 30 seconds:\n\n (deprecated) Use This class provides the ability to make remote calls to the backing service through method\n calls that map to API methods. Sample code to get started:\n\n Note: close() needs to be called on the SpeechClient object to clean up resources such as\n threads. In the example above, try-with-resources is used, which automatically calls close().\n\n The surface of this class includes several types of Java methods for each of the API's\n methods:\n\n See the individual methods for example code.\n\n Many parameters require resource names to be formatted in a particular way. To assist with\n these names, this class includes a format method for each type of name, and additionally a parse\n method to extract the individual identifiers contained within names that are returned.\n\n This class can be customized by passing in a custom instance of SpeechSettings to create().\n For example:\n\n To customize credentials:\n\n To customize the endpoint:\n\n Please refer to the GitHub repository's samples for more quickstart code snippets."
syntax:
content: "public class SpeechClient implements BackgroundResource"
inheritance:
@@ -79,7 +79,7 @@ items:
overload: "com.microsoft.samples.google.SpeechClient.SpeechClient*"
type: "Constructor"
package: "com.microsoft.samples.google"
- summary: "Constructs an instance of SpeechClient, using the given settings. This is protected so that it is easy to make a subclass, but otherwise, the static factory methods should be preferred."
+ summary: "Constructs an instance of SpeechClient, using the given settings. This is protected so that it\n is easy to make a subclass, but otherwise, the static factory methods should be preferred."
syntax:
content: "protected SpeechClient(SpeechSettings settings)"
parameters:
@@ -149,7 +149,7 @@ items:
overload: "com.microsoft.samples.google.SpeechClient.create*"
type: "Method"
package: "com.microsoft.samples.google"
- summary: "Constructs an instance of SpeechClient, using the given stub for making calls. This is for advanced usage - prefer using create(SpeechSettings)."
+ summary: "Constructs an instance of SpeechClient, using the given stub for making calls. This is for\n advanced usage - prefer using create(SpeechSettings)."
syntax:
content: "public static final SpeechClient create(SpeechStub stub)"
parameters:
@@ -168,7 +168,7 @@ items:
overload: "com.microsoft.samples.google.SpeechClient.create*"
type: "Method"
package: "com.microsoft.samples.google"
- summary: "Constructs an instance of SpeechClient, using the given settings. The channels are created based on the settings passed in, or defaults for any settings that are not set."
+ summary: "Constructs an instance of SpeechClient, using the given settings. The channels are created\n based on the settings passed in, or defaults for any settings that are not set."
syntax:
content: "public static final SpeechClient create(SpeechSettings settings)"
parameters:
@@ -189,7 +189,7 @@ items:
overload: "com.microsoft.samples.google.SpeechClient.getOperationsClient*"
type: "Method"
package: "com.microsoft.samples.google"
- summary: "Returns the OperationsClient that can be used to query the status of a long-running operation returned by another API method call."
+ summary: "Returns the OperationsClient that can be used to query the status of a long-running operation\n returned by another API method call."
syntax:
content: "public final OperationsClient getOperationsClient()"
return:
@@ -265,7 +265,7 @@ items:
overload: "com.microsoft.samples.google.SpeechClient.longRunningRecognizeAsync*"
type: "Method"
package: "com.microsoft.samples.google"
- summary: "Performs asynchronous speech recognition: receive results via the google.longrunning.Operations interface. Returns either an \\`Operation.error\\` or an \\`Operation.response\\` which contains a \\`LongRunningRecognizeResponse\\` message. For more information on asynchronous speech recognition, see the \\[how-to\\](https://cloud.google.com/speech-to-text/docs/async-recognize).\n\nSample code:\n\n```java\ntry (SpeechClient speechClient = SpeechClient.create()) {\n LongRunningRecognizeRequest request =\n LongRunningRecognizeRequest.newBuilder()\n .setConfig(RecognitionConfig.newBuilder().build())\n .setAudio(RecognitionAudio.newBuilder().build())\n .setOutputConfig(TranscriptOutputConfig.newBuilder().build())\n .build();\n LongRunningRecognizeResponse response = speechClient.longRunningRecognizeAsync(request).get();\n }\n```"
+ summary: "Performs asynchronous speech recognition: receive results via the google.longrunning.Operations\n interface. Returns either an Sample code:\n\n Sample code:\n\n Sample code:\n\n Sample code:\n\n Sample code:\n\n Sample code:\n\n Sample code:\n\n Sample code:\n\n Note: This method does not support applying settings to streaming methods."
syntax:
content: "public SpeechSettings.Builder applyToAllUnaryMethods(ApiFunction The default instance has everything set to sensible defaults:\n\n The builder of this class is recursive, so contained classes are themselves builders. When\n build() is called, the tree of builders is called to create the complete settings object.\n\n For example, to set the total timeout of recognize to 30 seconds:\n\n This class provides the ability to make remote calls to the backing service through method\n calls that map to API methods. Sample code to get started:\n\n Note: close() needs to be called on the SpeechClient object to clean up resources such as\n threads. In the example above, try-with-resources is used, which automatically calls close().\n\n The surface of this class includes several types of Java methods for each of the API's\n methods:\n\n See the individual methods for example code.\n\n Many parameters require resource names to be formatted in a particular way. To assist with\n these names, this class includes a format method for each type of name, and additionally a parse\n method to extract the individual identifiers contained within names that are returned.\n\n This class can be customized by passing in a custom instance of SpeechSettings to create().\n For example:\n\n To customize credentials:\n\n To customize the endpoint:\n\n Please refer to the GitHub repository's samples for more quickstart code snippets."
syntax:
content: "public class SpeechClient implements BackgroundResource"
inheritance:
@@ -79,7 +79,7 @@ items:
overload: "com.microsoft.samples.google.v1beta.SpeechClient.SpeechClient*"
type: "Constructor"
package: "com.microsoft.samples.google.v1beta"
- summary: "Constructs an instance of SpeechClient, using the given settings. This is protected so that it is easy to make a subclass, but otherwise, the static factory methods should be preferred."
+ summary: "Constructs an instance of SpeechClient, using the given settings. This is protected so that it\n is easy to make a subclass, but otherwise, the static factory methods should be preferred."
syntax:
content: "protected SpeechClient(SpeechSettings settings)"
parameters:
@@ -149,7 +149,7 @@ items:
overload: "com.microsoft.samples.google.v1beta.SpeechClient.create*"
type: "Method"
package: "com.microsoft.samples.google.v1beta"
- summary: "Constructs an instance of SpeechClient, using the given stub for making calls. This is for advanced usage - prefer using create(SpeechSettings)."
+ summary: "Constructs an instance of SpeechClient, using the given stub for making calls. This is for\n advanced usage - prefer using create(SpeechSettings)."
syntax:
content: "public static final SpeechClient create(SpeechStub stub)"
parameters:
@@ -168,7 +168,7 @@ items:
overload: "com.microsoft.samples.google.v1beta.SpeechClient.create*"
type: "Method"
package: "com.microsoft.samples.google.v1beta"
- summary: "Constructs an instance of SpeechClient, using the given settings. The channels are created based on the settings passed in, or defaults for any settings that are not set."
+ summary: "Constructs an instance of SpeechClient, using the given settings. The channels are created\n based on the settings passed in, or defaults for any settings that are not set."
syntax:
content: "public static final SpeechClient create(SpeechSettings settings)"
parameters:
@@ -189,7 +189,7 @@ items:
overload: "com.microsoft.samples.google.v1beta.SpeechClient.getOperationsClient*"
type: "Method"
package: "com.microsoft.samples.google.v1beta"
- summary: "Returns the OperationsClient that can be used to query the status of a long-running operation returned by another API method call."
+ summary: "Returns the OperationsClient that can be used to query the status of a long-running operation\n returned by another API method call."
syntax:
content: "public final OperationsClient getOperationsClient()"
return:
@@ -265,7 +265,7 @@ items:
overload: "com.microsoft.samples.google.v1beta.SpeechClient.longRunningRecognizeAsync*"
type: "Method"
package: "com.microsoft.samples.google.v1beta"
- summary: "Performs asynchronous speech recognition: receive results via the google.longrunning.Operations interface. Returns either an \\`Operation.error\\` or an \\`Operation.response\\` which contains a \\`LongRunningRecognizeResponse\\` message. For more information on asynchronous speech recognition, see the \\[how-to\\](https://cloud.google.com/speech-to-text/docs/async-recognize).\n\nSample code:\n\n```java\ntry (SpeechClient speechClient = SpeechClient.create()) {\n LongRunningRecognizeRequest request =\n LongRunningRecognizeRequest.newBuilder()\n .setConfig(RecognitionConfig.newBuilder().build())\n .setAudio(RecognitionAudio.newBuilder().build())\n .setOutputConfig(TranscriptOutputConfig.newBuilder().build())\n .build();\n LongRunningRecognizeResponse response = speechClient.longRunningRecognizeAsync(request).get();\n }\n```"
+ summary: "Performs asynchronous speech recognition: receive results via the google.longrunning.Operations\n interface. Returns either an Sample code:\n\n Sample code:\n\n Sample code:\n\n Sample code:\n\n Sample code:\n\n Sample code:\n\n Sample code:\n\n Sample code:\n\n This class provides the ability to make remote calls to the backing service through method\n calls that map to API methods. Sample code to get started:\n\n Note: close() needs to be called on the SpeechClient object to clean up resources such as\n threads. In the example above, try-with-resources is used, which automatically calls close().\n\n The surface of this class includes several types of Java methods for each of the API's\n methods:\n\n See the individual methods for example code.\n\n Many parameters require resource names to be formatted in a particular way. To assist with\n these names, this class includes a format method for each type of name, and additionally a parse\n method to extract the individual identifiers contained within names that are returned.\n\n This class can be customized by passing in a custom instance of SpeechSettings to create().\n For example:\n\n To customize credentials:\n\n To customize the endpoint:\n\n Please refer to the GitHub repository's samples for more quickstart code snippets."
syntax:
content: "public class SpeechClient implements BackgroundResource"
inheritance:
@@ -79,7 +79,7 @@ items:
overload: "com.microsoft.samples.google.v1p1alpha.SpeechClient.SpeechClient*"
type: "Constructor"
package: "com.microsoft.samples.google.v1p1alpha"
- summary: "Constructs an instance of SpeechClient, using the given settings. This is protected so that it is easy to make a subclass, but otherwise, the static factory methods should be preferred."
+ summary: "Constructs an instance of SpeechClient, using the given settings. This is protected so that it\n is easy to make a subclass, but otherwise, the static factory methods should be preferred."
syntax:
content: "protected SpeechClient(SpeechSettings settings)"
parameters:
@@ -149,7 +149,7 @@ items:
overload: "com.microsoft.samples.google.v1p1alpha.SpeechClient.create*"
type: "Method"
package: "com.microsoft.samples.google.v1p1alpha"
- summary: "Constructs an instance of SpeechClient, using the given stub for making calls. This is for advanced usage - prefer using create(SpeechSettings)."
+ summary: "Constructs an instance of SpeechClient, using the given stub for making calls. This is for\n advanced usage - prefer using create(SpeechSettings)."
syntax:
content: "public static final SpeechClient create(SpeechStub stub)"
parameters:
@@ -168,7 +168,7 @@ items:
overload: "com.microsoft.samples.google.v1p1alpha.SpeechClient.create*"
type: "Method"
package: "com.microsoft.samples.google.v1p1alpha"
- summary: "Constructs an instance of SpeechClient, using the given settings. The channels are created based on the settings passed in, or defaults for any settings that are not set."
+ summary: "Constructs an instance of SpeechClient, using the given settings. The channels are created\n based on the settings passed in, or defaults for any settings that are not set."
syntax:
content: "public static final SpeechClient create(SpeechSettings settings)"
parameters:
@@ -189,7 +189,7 @@ items:
overload: "com.microsoft.samples.google.v1p1alpha.SpeechClient.getOperationsClient*"
type: "Method"
package: "com.microsoft.samples.google.v1p1alpha"
- summary: "Returns the OperationsClient that can be used to query the status of a long-running operation returned by another API method call."
+ summary: "Returns the OperationsClient that can be used to query the status of a long-running operation\n returned by another API method call."
syntax:
content: "public final OperationsClient getOperationsClient()"
return:
@@ -265,7 +265,7 @@ items:
overload: "com.microsoft.samples.google.v1p1alpha.SpeechClient.longRunningRecognizeAsync*"
type: "Method"
package: "com.microsoft.samples.google.v1p1alpha"
- summary: "Performs asynchronous speech recognition: receive results via the google.longrunning.Operations interface. Returns either an \\`Operation.error\\` or an \\`Operation.response\\` which contains a \\`LongRunningRecognizeResponse\\` message. For more information on asynchronous speech recognition, see the \\[how-to\\](https://cloud.google.com/speech-to-text/docs/async-recognize).\n\nSample code:\n\n```java\ntry (SpeechClient speechClient = SpeechClient.create()) {\n LongRunningRecognizeRequest request =\n LongRunningRecognizeRequest.newBuilder()\n .setConfig(RecognitionConfig.newBuilder().build())\n .setAudio(RecognitionAudio.newBuilder().build())\n .setOutputConfig(TranscriptOutputConfig.newBuilder().build())\n .build();\n LongRunningRecognizeResponse response = speechClient.longRunningRecognizeAsync(request).get();\n }\n```"
+ summary: "Performs asynchronous speech recognition: receive results via the google.longrunning.Operations\n interface. Returns either an Sample code:\n\n Sample code:\n\n Sample code:\n\n Sample code:\n\n Sample code:\n\n Sample code:\n\n Sample code:\n\n Sample code:\n\n \n Or this Service Description: Service that implements Google Cloud Speech API.\n\n Sample for SpeechClient:\n\n Service Description: Service that implements Google Cloud Speech Adaptation API.\n\n Sample for AdaptationClient:\n\n ([^<]+)
","$1")
+ .replaceAll("", "
")
+        .replaceAll("`([^`]+)`", "<code>$1</code>")
+        .replaceAll("\\[([^]]+)]\\(([^)]+)\\)", "<a href=\"$2\">$1</a>")
+ .replaceAll("\\[([^]]+)]\\[([^]]+)\\]", "$1
");
}
}
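
The replacement chain above is partially garbled in this patch, so here is a rough, self-contained sketch of the backtick-to-code and markdown-link conversions it performs. The class name `CleanupSketch` is hypothetical, and the exact HTML the doclet emits may differ; only the regex shapes mirror the patch:

```java
/**
 * Hypothetical sketch of the markdown-to-HTML cleanup done by
 * YamlUtil.cleanupHtml. Class name and emitted tags are assumptions;
 * the regex patterns follow the replaceAll chain in the patch above.
 */
public class CleanupSketch {
  public static String cleanupHtml(String text) {
    if (text == null || text.isEmpty()) {
      return text;
    }
    return text
        // inline code spans: `text` -> <code>text</code>
        .replaceAll("`([^`]+)`", "<code>$1</code>")
        // markdown links: [text](link) -> <a href="link">text</a>
        .replaceAll("\\[([^]]+)]\\(([^)]+)\\)", "<a href=\"$2\">$1</a>");
  }

  public static void main(String[] args) {
    // prints: see <code>RecognitionConfig</code> and
    //         <a href="https://cloud.google.com/speech">how-to</a>
    System.out.println(
        cleanupHtml("see `RecognitionConfig` and [how-to](https://cloud.google.com/speech)"));
  }
}
```

Text that matches neither pattern passes through unchanged, which is why the tests below mix the converted fragments with random UUID strings.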
diff --git a/third_party/docfx-doclet-143274/src/test/java/com/microsoft/samples/package-info.java b/third_party/docfx-doclet-143274/src/test/java/com/microsoft/samples/package-info.java
index e2126573..8bfaebaa 100644
--- a/third_party/docfx-doclet-143274/src/test/java/com/microsoft/samples/package-info.java
+++ b/third_party/docfx-doclet-143274/src/test/java/com/microsoft/samples/package-info.java
@@ -1,4 +1,51 @@
+/*
+ * Copyright 2021 Google LLC
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * https://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
/**
- * This package contains the sample set of classes for testing DocFx doclet.
+ * The interfaces provided are listed below, along with usage samples.
+ *
+ * {@code
+ * try (SpeechClient speechClient = SpeechClient.create()) {
+ * RecognitionConfig config = RecognitionConfig.newBuilder().build();
+ * RecognitionAudio audio = RecognitionAudio.newBuilder().build();
+ * RecognizeResponse response = speechClient.recognize(config, audio);
+ * }
+ * }
+ *
+ * {@code
+ * try (AdaptationClient adaptationClient = AdaptationClient.create()) {
+ * LocationName parent = LocationName.of("[PROJECT]", "[LOCATION]");
+ * PhraseSet phraseSet = PhraseSet.newBuilder().build();
+ * String phraseSetId = "phraseSetId959902180";
+ * PhraseSet response = adaptationClient.createPhraseSet(parent, phraseSet, phraseSetId);
+ * }
+ * }
*/
+@Generated("by gapic-generator-java")
package com.microsoft.samples;
+
+import javax.annotation.Generated;
diff --git a/third_party/docfx-doclet-143274/src/test/java/com/microsoft/util/YamlUtilTest.java b/third_party/docfx-doclet-143274/src/test/java/com/microsoft/util/YamlUtilTest.java
index 955f41fa..7b290f56 100644
--- a/third_party/docfx-doclet-143274/src/test/java/com/microsoft/util/YamlUtilTest.java
+++ b/third_party/docfx-doclet-143274/src/test/java/com/microsoft/util/YamlUtilTest.java
@@ -9,10 +9,10 @@
import java.io.File;
import java.io.IOException;
import java.util.Collections;
+import java.util.UUID;
import static java.nio.charset.StandardCharsets.UTF_8;
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertNull;
+import static org.junit.Assert.*;
public class YamlUtilTest {
@@ -45,29 +45,92 @@ public void objectToYamlString() {
+ " description: \"Some desc 5\"\n");
}
+ private MetadataFileItem buildMetadataFileItem(int seed) {
+ MetadataFileItem metadataFileItem = new MetadataFileItem("Some uid " + seed);
+ metadataFileItem.setId("Some id" + seed);
+ metadataFileItem.setHref("Some href" + seed);
+ metadataFileItem.setParameters(Collections.singletonList(
+ new MethodParameter("Some id " + seed, "Some type " + seed, "Some desc " + seed)));
+
+ return metadataFileItem;
+ }
+
+
@Test
- public void convertHtmlToMarkdown() throws IOException {
- String text = FileUtils.readFileToString(new File("target/test-classes/html2md/initial.html"), UTF_8);
- String expectedResult = FileUtils.readFileToString(new File("target/test-classes/html2md/converted.md"), UTF_8);
+ public void cleanupHtmlRemoveLonePreTagsTest() {
+        String expectedActual = "<pre>text</pre>";
+        String expectedResult = "text";
+        String expectedWithCode = "<pre><code></code></pre>";
+ String random = UUID.randomUUID().toString();
+
+ assertEquals(expectedResult, YamlUtil.cleanupHtml(expectedActual));
+ assertEquals(random + expectedResult + random, YamlUtil.cleanupHtml(random + expectedActual + random));
+ assertEquals(expectedResult + random + expectedResult, YamlUtil.cleanupHtml(expectedActual + random + expectedActual));
+ assertEquals(expectedWithCode, YamlUtil.cleanupHtml(expectedWithCode));
+ }
- String result = YamlUtil.convertHtmlToMarkdown(text);
+ @Test
+ public void cleanupHtmlIncludePrettyPrintTest() {
+ String expectedActual = "text
";
+ String expectedResult = "
";
+ String random = UUID.randomUUID().toString();
- assertEquals("Wrong result", result, expectedResult);
+ assertEquals(expectedResult, YamlUtil.cleanupHtml(expectedActual));
+ assertEquals(random + expectedResult + random, YamlUtil.cleanupHtml(random + expectedActual + random));
+ assertEquals(expectedResult + random + expectedResult, YamlUtil.cleanupHtml(expectedActual + random + expectedActual));
+ assertNotEquals(expectedResult, YamlUtil.cleanupHtml("
" + random + "
"));
+ assertFalse(YamlUtil.cleanupHtml("
" + random + "
").contains("class=\"pretty-print\""));
}
@Test
- public void convertHtmlToMarkdownForBlankParam() {
- assertNull("Wrong result for null", YamlUtil.convertHtmlToMarkdown(null));
- assertEquals("Wrong result for empty string", YamlUtil.convertHtmlToMarkdown(""), "");
+ public void cleanupHtmlAddCodeTagsTest() {
+ String expectedActual = "`text`";
+        String expectedResult = "<code>text</code>";
+ String random = UUID.randomUUID().toString();
+
+ assertEquals(expectedResult, YamlUtil.cleanupHtml(expectedActual));
+ assertEquals(random + expectedResult + random, YamlUtil.cleanupHtml(random + expectedActual + random));
+ assertEquals(expectedResult + random + expectedResult, YamlUtil.cleanupHtml(expectedActual + random + expectedActual));
+ assertEquals("`" + expectedResult, YamlUtil.cleanupHtml("`" + expectedActual));
+        assertFalse(YamlUtil.cleanupHtml("`" + random).contains("<code>"));
}
- private MetadataFileItem buildMetadataFileItem(int seed) {
- MetadataFileItem metadataFileItem = new MetadataFileItem("Some uid " + seed);
- metadataFileItem.setId("Some id" + seed);
- metadataFileItem.setHref("Some href" + seed);
- metadataFileItem.setParameters(Collections.singletonList(
- new MethodParameter("Some id " + seed, "Some type " + seed, "Some desc " + seed)));
+ @Test
+ public void cleanupHtmlAddHrefTagsTest() {
+ String expectedActual = "[text](link)";
+        String expectedResult = "<a href=\"link\">text</a>";
+ String random = UUID.randomUUID().toString();
- return metadataFileItem;
+ assertEquals(expectedResult, YamlUtil.cleanupHtml(expectedActual));
+ assertEquals(random + expectedResult + random, YamlUtil.cleanupHtml(random + expectedActual + random));
+ assertEquals(expectedResult + random + expectedResult, YamlUtil.cleanupHtml(expectedActual + random + expectedActual));
+ assertEquals("[text]](link)", YamlUtil.cleanupHtml("[text]](link)"));
+ assertFalse(YamlUtil.cleanupHtml("[text(link)]").contains("href"));
+ }
+
+ @Test
+ public void cleanupHtmlEqualTitlesTest() {
+ String expectedActual = "======================= SpeechClient =======================";
+ String expectedResult = "
SpeechClient
";
+ String random = UUID.randomUUID().toString();
+
+ assertEquals(expectedResult, YamlUtil.cleanupHtml(expectedActual));
+ assertEquals(random + expectedResult + random, YamlUtil.cleanupHtml(random + expectedActual + random));
+ assertEquals(expectedResult + random + expectedResult, YamlUtil.cleanupHtml(expectedActual + random + expectedActual));
+ assertEquals("= text =", YamlUtil.cleanupHtml("= text ="));
+ }
+
+ @Test
+ public void cleanupHtmlReferenceTest() {
+ String expectedActual = "[KeyRing][google.cloud.kms.v1.KeyRing]";
+ String expectedResult = "\n
"
syntax:
content: "public interface BetaApi implements Annotation"
implements:
diff --git a/third_party/docfx-doclet-143274/src/test/resources/expected-generated-files/com.microsoft.samples.google.ProductSearchSettings.Builder.yml b/third_party/docfx-doclet-143274/src/test/resources/expected-generated-files/com.microsoft.samples.google.ProductSearchSettings.Builder.yml
index a61c9d27..1f691bf3 100644
--- a/third_party/docfx-doclet-143274/src/test/resources/expected-generated-files/com.microsoft.samples.google.ProductSearchSettings.Builder.yml
+++ b/third_party/docfx-doclet-143274/src/test/resources/expected-generated-files/com.microsoft.samples.google.ProductSearchSettings.Builder.yml
@@ -171,7 +171,7 @@ items:
overload: "com.microsoft.samples.google.ProductSearchSettings.Builder.applyToAllUnaryMethods*"
type: "Method"
package: "com.microsoft.samples.google"
- summary: "Applies the given settings updater function to all of the unary API methods in this service.\n\nNote: This method does not support applying settings to streaming methods."
+ summary: "Applies the given settings updater function to all of the unary API methods in this service.\n\n \n
\n\n
"
syntax:
content: "public class ProductSearchSettings extends ClientSettings\n ProductSearchSettings.Builder productSearchSettingsBuilder = ProductSearchSettings.newBuilder();\n productSearchSettingsBuilder\n .createProductSetSettings()\n .setRetrySettings(\n productSearchSettingsBuilder\n .createProductSetSettings()\n .getRetrySettings()\n .toBuilder()\n .setTotalTimeout(Duration.ofSeconds(30))\n .build());\n ProductSearchSettings productSearchSettings = productSearchSettingsBuilder.build();\n
RecognitionConfig
.\n Either content
or uri
must be supplied. Supplying both or neither\n returns google.cloud.speech.v1.RecognitionAudio
"
syntax:
content: "public final class RecognitionAudio extends GeneratedMessageV3 implements RecognitionAudioOrBuilder"
inheritance:
@@ -220,7 +220,7 @@ items:
overload: "com.microsoft.samples.google.RecognitionAudio.getContent*"
type: "Method"
package: "com.microsoft.samples.google"
- summary: "```\nThe audio data bytes encoded as specified in\n `RecognitionConfig`. Note: as with all bytes fields, proto buffers use a\n pure binary representation, whereas JSON representations use base64.\n```\n\n`bytes content = 1;`"
+ summary: "\n The audio data bytes encoded as specified in\n RecognitionConfig
. Note: as with all bytes fields, proto buffers use a\n pure binary representation, whereas JSON representations use base64.\n \n\n bytes content = 1;
"
syntax:
content: "public ByteString getContent()"
return:
@@ -284,7 +284,7 @@ items:
overload: "com.microsoft.samples.google.RecognitionAudio.getUri*"
type: "Method"
package: "com.microsoft.samples.google"
- summary: "```\nURI that points to a file that contains audio data bytes as specified in\n `RecognitionConfig`. The file must not be compressed (for example, gzip).\n Currently, only Google Cloud Storage URIs are\n supported, which must be specified in the following format:\n `gs://bucket_name/object_name` (other URI formats return\n [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). For more information, see\n [Request URIs](https://cloud.google.com/storage/docs/reference-uris).\n```\n\n`string uri = 2;`"
+ summary: "\n URI that points to a file that contains audio data bytes as specified in\n RecognitionConfig
. The file must not be compressed (for example, gzip).\n Currently, only Google Cloud Storage URIs are\n supported, which must be specified in the following format:\n gs://bucket_name/object_name
(other URI formats return\n string uri = 2;
"
syntax:
content: "public String getUri()"
return:
@@ -301,7 +301,7 @@ items:
overload: "com.microsoft.samples.google.RecognitionAudio.getUriBytes*"
type: "Method"
package: "com.microsoft.samples.google"
- summary: "```\nURI that points to a file that contains audio data bytes as specified in\n `RecognitionConfig`. The file must not be compressed (for example, gzip).\n Currently, only Google Cloud Storage URIs are\n supported, which must be specified in the following format:\n `gs://bucket_name/object_name` (other URI formats return\n [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). For more information, see\n [Request URIs](https://cloud.google.com/storage/docs/reference-uris).\n```\n\n`string uri = 2;`"
+ summary: "\n URI that points to a file that contains audio data bytes as specified in\n RecognitionConfig
. The file must not be compressed (for example, gzip).\n Currently, only Google Cloud Storage URIs are\n supported, which must be specified in the following format:\n gs://bucket_name/object_name
(other URI formats return\n string uri = 2;
"
syntax:
content: "public ByteString getUriBytes()"
return:
@@ -318,7 +318,7 @@ items:
overload: "com.microsoft.samples.google.RecognitionAudio.hasContent*"
type: "Method"
package: "com.microsoft.samples.google"
- summary: "```\nThe audio data bytes encoded as specified in\n `RecognitionConfig`. Note: as with all bytes fields, proto buffers use a\n pure binary representation, whereas JSON representations use base64.\n```\n\n`bytes content = 1;`"
+ summary: "\n The audio data bytes encoded as specified in\n RecognitionConfig
. Note: as with all bytes fields, proto buffers use a\n pure binary representation, whereas JSON representations use base64.\n \n\n bytes content = 1;
"
syntax:
content: "public boolean hasContent()"
return:
@@ -335,7 +335,7 @@ items:
overload: "com.microsoft.samples.google.RecognitionAudio.hasUri*"
type: "Method"
package: "com.microsoft.samples.google"
- summary: "```\nURI that points to a file that contains audio data bytes as specified in\n `RecognitionConfig`. The file must not be compressed (for example, gzip).\n Currently, only Google Cloud Storage URIs are\n supported, which must be specified in the following format:\n `gs://bucket_name/object_name` (other URI formats return\n [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). For more information, see\n [Request URIs](https://cloud.google.com/storage/docs/reference-uris).\n```\n\n`string uri = 2;`"
+ summary: "\n URI that points to a file that contains audio data bytes as specified in\n RecognitionConfig
. The file must not be compressed (for example, gzip).\n Currently, only Google Cloud Storage URIs are\n supported, which must be specified in the following format:\n gs://bucket_name/object_name
(other URI formats return\n string uri = 2;
"
syntax:
content: "public boolean hasUri()"
return:
diff --git a/third_party/docfx-doclet-143274/src/test/resources/expected-generated-files/com.microsoft.samples.google.SpeechClient.yml b/third_party/docfx-doclet-143274/src/test/resources/expected-generated-files/com.microsoft.samples.google.SpeechClient.yml
index 41aaf61c..8b438158 100644
--- a/third_party/docfx-doclet-143274/src/test/resources/expected-generated-files/com.microsoft.samples.google.SpeechClient.yml
+++ b/third_party/docfx-doclet-143274/src/test/resources/expected-generated-files/com.microsoft.samples.google.SpeechClient.yml
@@ -33,7 +33,7 @@ items:
fullName: "com.microsoft.samples.google.SpeechClient"
type: "Class"
package: "com.microsoft.samples.google"
- summary: "Service Description: Service that implements Google Cloud Speech API.\n\nThis class provides the ability to make remote calls to the backing service through method calls that map to API methods. Sample code to get started:\n\n```java\ntry (SpeechClient speechClient = SpeechClient.create()) {\n RecognitionConfig config = RecognitionConfig.newBuilder().build();\n RecognitionAudio audio = RecognitionAudio.newBuilder().build();\n RecognizeResponse response = speechClient.recognize(config, audio);\n }\n```\n\nNote: close() needs to be called on the SpeechClient object to clean up resources such as threads. In the example above, try-with-resources is used, which automatically calls close().\n\nThe surface of this class includes several types of Java methods for each of the API's methods:\n\n1. A \"flattened\" method. With this type of method, the fields of the request type have been converted into function parameters. It may be the case that not all fields are available as parameters, and not every API method will have a flattened method entry point.\n2. A \"request object\" method. This type of method only takes one parameter, a request object, which must be constructed before the call. Not every API method will have a request object method.\n3. A \"callable\" method. This type of method takes no parameters and returns an immutable API callable object, which can be used to initiate calls to the service.\n\nSee the individual methods for example code.\n\nMany parameters require resource names to be formatted in a particular way. To assist with these names, this class includes a format method for each type of name, and additionally a parse method to extract the individual identifiers contained within names that are returned.\n\nThis class can be customized by passing in a custom instance of SpeechSettings to create(). For example:\n\nTo customize credentials:\n\n```java\nSpeechSettings speechSettings =\n SpeechSettings.newBuilder()\n .setCredentialsProvider(FixedCredentialsProvider.create(myCredentials))\n .build();\n SpeechClient speechClient = SpeechClient.create(speechSettings);\n```\n\nTo customize the endpoint:\n\n```java\nSpeechSettings speechSettings = SpeechSettings.newBuilder().setEndpoint(myEndpoint).build();\n SpeechClient speechClient = SpeechClient.create(speechSettings);\n```\n\nPlease refer to the GitHub repository's samples for more quickstart code snippets."
+ summary: "Service Description: Service that implements Google Cloud Speech API.\n\n This class provides the ability to make remote calls to the backing service through method\n calls that map to API methods. Sample code to get started:\n\n <pre>\n try (SpeechClient speechClient = SpeechClient.create()) {\n RecognitionConfig config = RecognitionConfig.newBuilder().build();\n RecognitionAudio audio = RecognitionAudio.newBuilder().build();\n RecognizeResponse response = speechClient.recognize(config, audio);\n }\n </pre>\n\n Note: close() needs to be called on the SpeechClient object to clean up resources such as\n threads. In the example above, try-with-resources is used, which automatically calls close().\n\n The surface of this class includes several types of Java methods for each of the API's\n methods:\n\n 1. A \"flattened\" method. With this type of method, the fields of the request type have been converted into function parameters. It may be the case that not all fields are available as parameters, and not every API method will have a flattened method entry point.\n 2. A \"request object\" method. This type of method only takes one parameter, a request object, which must be constructed before the call. Not every API method will have a request object method.\n 3. A \"callable\" method. This type of method takes no parameters and returns an immutable API callable object, which can be used to initiate calls to the service.\n\n See the individual methods for example code.\n\n Many parameters require resource names to be formatted in a particular way. To assist with\n these names, this class includes a format method for each type of name, and additionally a parse\n method to extract the individual identifiers contained within names that are returned.\n\n This class can be customized by passing in a custom instance of SpeechSettings to create().\n For example:\n\n To customize credentials:\n\n <pre>\n SpeechSettings speechSettings =\n SpeechSettings.newBuilder()\n .setCredentialsProvider(FixedCredentialsProvider.create(myCredentials))\n .build();\n SpeechClient speechClient = SpeechClient.create(speechSettings);\n </pre>\n\n To customize the endpoint:\n\n <pre>\n SpeechSettings speechSettings = SpeechSettings.newBuilder().setEndpoint(myEndpoint).build();\n SpeechClient speechClient = SpeechClient.create(speechSettings);\n </pre>\n\n Please refer to the GitHub repository's samples for more quickstart code snippets."
summary: "Performs asynchronous speech recognition: receive results via the google.longrunning.Operations\n interface. Returns either an Operation.error or an Operation.response which contains a\n LongRunningRecognizeResponse message. For more information on asynchronous speech\n recognition, see the how-to.\n\n Sample code:\n\n <pre>\n try (SpeechClient speechClient = SpeechClient.create()) {\n LongRunningRecognizeRequest request =\n LongRunningRecognizeRequest.newBuilder()\n .setConfig(RecognitionConfig.newBuilder().build())\n .setAudio(RecognitionAudio.newBuilder().build())\n .setOutputConfig(TranscriptOutputConfig.newBuilder().build())\n .build();\n LongRunningRecognizeResponse response = speechClient.longRunningRecognizeAsync(request).get();\n }\n </pre>"
syntax:
content: "public final OperationFuture<LongRunningRecognizeResponse,LongRunningRecognizeMetadata> longRunningRecognizeAsync(LongRunningRecognizeRequest request)"
summary: "Performs asynchronous speech recognition: receive results via the google.longrunning.Operations\n interface. Returns either an Operation.error or an Operation.response which contains a\n LongRunningRecognizeResponse message. For more information on asynchronous speech\n recognition, see the how-to.\n\n Sample code:\n\n <pre>\n try (SpeechClient speechClient = SpeechClient.create()) {\n RecognitionConfig config = RecognitionConfig.newBuilder().build();\n RecognitionAudio audio = RecognitionAudio.newBuilder().build();\n LongRunningRecognizeResponse response =\n speechClient.longRunningRecognizeAsync(config, audio).get();\n }\n </pre>"
syntax:
content: "public final OperationFuture<LongRunningRecognizeResponse,LongRunningRecognizeMetadata> longRunningRecognizeAsync(RecognitionConfig config, RecognitionAudio audio)"
summary: "Performs asynchronous speech recognition: receive results via the google.longrunning.Operations\n interface. Returns either an Operation.error or an Operation.response which contains a\n LongRunningRecognizeResponse message. For more information on asynchronous speech\n recognition, see the how-to.\n\n Sample code:\n\n <pre>\n try (SpeechClient speechClient = SpeechClient.create()) {\n LongRunningRecognizeRequest request =\n LongRunningRecognizeRequest.newBuilder()\n .setConfig(RecognitionConfig.newBuilder().build())\n .setAudio(RecognitionAudio.newBuilder().build())\n .setOutputConfig(TranscriptOutputConfig.newBuilder().build())\n .build();\n ApiFuture<Operation> future = speechClient.longRunningRecognizeCallable().futureCall(request);\n // Do something.\n Operation response = future.get();\n }\n </pre>"
syntax:
content: "public final UnaryCallable<LongRunningRecognizeRequest,Operation> longRunningRecognizeCallable()"
summary: "Performs asynchronous speech recognition: receive results via the google.longrunning.Operations\n interface. Returns either an Operation.error or an Operation.response which contains a\n LongRunningRecognizeResponse message. For more information on asynchronous speech\n recognition, see the how-to.\n\n Sample code:\n\n <pre>\n try (SpeechClient speechClient = SpeechClient.create()) {\n LongRunningRecognizeRequest request =\n LongRunningRecognizeRequest.newBuilder()\n .setConfig(RecognitionConfig.newBuilder().build())\n .setAudio(RecognitionAudio.newBuilder().build())\n .setOutputConfig(TranscriptOutputConfig.newBuilder().build())\n .build();\n OperationFuture<LongRunningRecognizeResponse, LongRunningRecognizeMetadata> future =\n speechClient.longRunningRecognizeOperationCallable().futureCall(request);\n // Do something.\n LongRunningRecognizeResponse response = future.get();\n }\n </pre>"
syntax:
content: "public final OperationCallable<LongRunningRecognizeRequest,LongRunningRecognizeResponse,LongRunningRecognizeMetadata> longRunningRecognizeOperationCallable()"
syntax:
content: "public final RecognizeResponse recognize(RecognitionConfig config, RecognitionAudio audio)"
parameters:
@@ -363,7 +363,7 @@ items:
overload: "com.microsoft.samples.google.SpeechClient.recognize*"
type: "Method"
package: "com.microsoft.samples.google"
- summary: "Performs synchronous speech recognition: receive results after all audio has been sent and processed.\n\nSample code:\n\n```java\ntry (SpeechClient speechClient = SpeechClient.create()) {\n RecognizeRequest request =\n RecognizeRequest.newBuilder()\n .setConfig(RecognitionConfig.newBuilder().build())\n .setAudio(RecognitionAudio.newBuilder().build())\n .build();\n RecognizeResponse response = speechClient.recognize(request);\n }\n```"
+ summary: "Performs synchronous speech recognition: receive results after all audio has been sent and\n processed.\n\n Sample code:\n\n <pre>\n try (SpeechClient speechClient = SpeechClient.create()) {\n RecognitionConfig config = RecognitionConfig.newBuilder().build();\n RecognitionAudio audio = RecognitionAudio.newBuilder().build();\n RecognizeResponse response = speechClient.recognize(config, audio);\n }\n </pre>"
syntax:
content: "public final RecognizeResponse recognize(RecognizeRequest request)"
parameters:
@@ -383,7 +383,7 @@ items:
overload: "com.microsoft.samples.google.SpeechClient.recognizeCallable*"
type: "Method"
package: "com.microsoft.samples.google"
- summary: "Performs synchronous speech recognition: receive results after all audio has been sent and processed.\n\nSample code:\n\n```java\ntry (SpeechClient speechClient = SpeechClient.create()) {\n RecognizeRequest request =\n RecognizeRequest.newBuilder()\n .setConfig(RecognitionConfig.newBuilder().build())\n .setAudio(RecognitionAudio.newBuilder().build())\n .build();\n ApiFuture future = speechClient.recognizeCallable().futureCall(request);\n // Do something.\n RecognizeResponse response = future.get();\n }\n```"
+ summary: "Performs synchronous speech recognition: receive results after all audio has been sent and\n processed.\n\n Sample code:\n\n <pre>\n try (SpeechClient speechClient = SpeechClient.create()) {\n RecognizeRequest request =\n RecognizeRequest.newBuilder()\n .setConfig(RecognitionConfig.newBuilder().build())\n .setAudio(RecognitionAudio.newBuilder().build())\n .build();\n RecognizeResponse response = speechClient.recognize(request);\n }\n </pre>"
syntax:
content: "public final UnaryCallable<RecognizeRequest,RecognizeResponse> recognizeCallable()"
syntax:
content: "public final BidiStreamingCallable<StreamingRecognizeRequest,StreamingRecognizeResponse> streamingRecognizeCallable()"
summary: "Settings class to configure an instance of SpeechClient.\n\n The default instance has everything set to sensible defaults:\n\n The builder of this class is recursive, so contained classes are themselves builders. When\n build() is called, the tree of builders is called to create the complete settings object.\n\n For example, to set the total timeout of recognize to 30 seconds:\n\n <pre>\n SpeechSettings.Builder speechSettingsBuilder = SpeechSettings.newBuilder();\n speechSettingsBuilder\n .recognizeSettings()\n .setRetrySettings(\n speechSettingsBuilder\n .recognizeSettings()\n .getRetrySettings()\n .toBuilder()\n .setTotalTimeout(Duration.ofSeconds(30))\n .build());\n SpeechSettings speechSettings = speechSettingsBuilder.build();\n </pre>"
syntax:
content: "public class SpeechSettings extends ClientSettings<SpeechSettings>"
summary: "Service Description: Service that implements Google Cloud Speech API.\n\n This class provides the ability to make remote calls to the backing service through method\n calls that map to API methods. Sample code to get started:\n\n <pre>\n try (SpeechClient speechClient = SpeechClient.create()) {\n RecognitionConfig config = RecognitionConfig.newBuilder().build();\n RecognitionAudio audio = RecognitionAudio.newBuilder().build();\n RecognizeResponse response = speechClient.recognize(config, audio);\n }\n </pre>\n\n Note: close() needs to be called on the SpeechClient object to clean up resources such as\n threads. In the example above, try-with-resources is used, which automatically calls close().\n\n This class can be customized by passing in a custom instance of SpeechSettings to create().\n For example:\n\n To customize credentials:\n\n <pre>\n SpeechSettings speechSettings =\n SpeechSettings.newBuilder()\n .setCredentialsProvider(FixedCredentialsProvider.create(myCredentials))\n .build();\n SpeechClient speechClient = SpeechClient.create(speechSettings);\n </pre>\n\n To customize the endpoint:\n\n <pre>\n SpeechSettings speechSettings = SpeechSettings.newBuilder().setEndpoint(myEndpoint).build();\n SpeechClient speechClient = SpeechClient.create(speechSettings);\n </pre>"
summary: "Performs asynchronous speech recognition: receive results via the google.longrunning.Operations\n interface. Returns either an Operation.error or an Operation.response which contains a\n LongRunningRecognizeResponse message. For more information on asynchronous speech\n recognition, see the how-to.\n\n Sample code:\n\n <pre>\n try (SpeechClient speechClient = SpeechClient.create()) {\n LongRunningRecognizeRequest request =\n LongRunningRecognizeRequest.newBuilder()\n .setConfig(RecognitionConfig.newBuilder().build())\n .setAudio(RecognitionAudio.newBuilder().build())\n .setOutputConfig(TranscriptOutputConfig.newBuilder().build())\n .build();\n LongRunningRecognizeResponse response = speechClient.longRunningRecognizeAsync(request).get();\n }\n </pre>"
syntax:
content: "public final OperationFuture<LongRunningRecognizeResponse,LongRunningRecognizeMetadata> longRunningRecognizeAsync(LongRunningRecognizeRequest request)"
summary: "Performs asynchronous speech recognition: receive results via the google.longrunning.Operations\n interface. Returns either an Operation.error or an Operation.response which contains a\n LongRunningRecognizeResponse message. For more information on asynchronous speech\n recognition, see the how-to.\n\n Sample code:\n\n <pre>\n try (SpeechClient speechClient = SpeechClient.create()) {\n RecognitionConfig config = RecognitionConfig.newBuilder().build();\n RecognitionAudio audio = RecognitionAudio.newBuilder().build();\n LongRunningRecognizeResponse response =\n speechClient.longRunningRecognizeAsync(config, audio).get();\n }\n </pre>"
syntax:
content: "public final OperationFuture<LongRunningRecognizeResponse,LongRunningRecognizeMetadata> longRunningRecognizeAsync(RecognitionConfig config, RecognitionAudio audio)"
summary: "Performs asynchronous speech recognition: receive results via the google.longrunning.Operations\n interface. Returns either an Operation.error or an Operation.response which contains a\n LongRunningRecognizeResponse message. For more information on asynchronous speech\n recognition, see the how-to.\n\n Sample code:\n\n <pre>\n try (SpeechClient speechClient = SpeechClient.create()) {\n LongRunningRecognizeRequest request =\n LongRunningRecognizeRequest.newBuilder()\n .setConfig(RecognitionConfig.newBuilder().build())\n .setAudio(RecognitionAudio.newBuilder().build())\n .setOutputConfig(TranscriptOutputConfig.newBuilder().build())\n .build();\n ApiFuture<Operation> future = speechClient.longRunningRecognizeCallable().futureCall(request);\n // Do something.\n Operation response = future.get();\n }\n </pre>"
syntax:
content: "public final UnaryCallable<LongRunningRecognizeRequest,Operation> longRunningRecognizeCallable()"
summary: "Performs asynchronous speech recognition: receive results via the google.longrunning.Operations\n interface. Returns either an Operation.error or an Operation.response which contains a\n LongRunningRecognizeResponse message. For more information on asynchronous speech\n recognition, see the how-to.\n\n Sample code:\n\n <pre>\n try (SpeechClient speechClient = SpeechClient.create()) {\n LongRunningRecognizeRequest request =\n LongRunningRecognizeRequest.newBuilder()\n .setConfig(RecognitionConfig.newBuilder().build())\n .setAudio(RecognitionAudio.newBuilder().build())\n .setOutputConfig(TranscriptOutputConfig.newBuilder().build())\n .build();\n OperationFuture<LongRunningRecognizeResponse, LongRunningRecognizeMetadata> future =\n speechClient.longRunningRecognizeOperationCallable().futureCall(request);\n // Do something.\n LongRunningRecognizeResponse response = future.get();\n }\n </pre>"
syntax:
content: "public final OperationCallable<LongRunningRecognizeRequest,LongRunningRecognizeResponse,LongRunningRecognizeMetadata> longRunningRecognizeOperationCallable()"
syntax:
content: "public final RecognizeResponse recognize(RecognitionConfig config, RecognitionAudio audio)"
parameters:
@@ -363,7 +363,7 @@ items:
overload: "com.microsoft.samples.google.v1beta.SpeechClient.recognize*"
type: "Method"
package: "com.microsoft.samples.google.v1beta"
- summary: "Performs synchronous speech recognition: receive results after all audio has been sent and processed.\n\nSample code:\n\n```java\ntry (SpeechClient speechClient = SpeechClient.create()) {\n RecognizeRequest request =\n RecognizeRequest.newBuilder()\n .setConfig(RecognitionConfig.newBuilder().build())\n .setAudio(RecognitionAudio.newBuilder().build())\n .build();\n RecognizeResponse response = speechClient.recognize(request);\n }\n```"
+ summary: "Performs synchronous speech recognition: receive results after all audio has been sent and\n processed.\n\n Sample code:\n\n <pre>\n try (SpeechClient speechClient = SpeechClient.create()) {\n RecognitionConfig config = RecognitionConfig.newBuilder().build();\n RecognitionAudio audio = RecognitionAudio.newBuilder().build();\n RecognizeResponse response = speechClient.recognize(config, audio);\n }\n </pre>"
syntax:
content: "public final RecognizeResponse recognize(RecognizeRequest request)"
parameters:
@@ -383,7 +383,7 @@ items:
overload: "com.microsoft.samples.google.v1beta.SpeechClient.recognizeCallable*"
type: "Method"
package: "com.microsoft.samples.google.v1beta"
- summary: "Performs synchronous speech recognition: receive results after all audio has been sent and processed.\n\nSample code:\n\n```java\ntry (SpeechClient speechClient = SpeechClient.create()) {\n RecognizeRequest request =\n RecognizeRequest.newBuilder()\n .setConfig(RecognitionConfig.newBuilder().build())\n .setAudio(RecognitionAudio.newBuilder().build())\n .build();\n ApiFuture future = speechClient.recognizeCallable().futureCall(request);\n // Do something.\n RecognizeResponse response = future.get();\n }\n```"
+ summary: "Performs synchronous speech recognition: receive results after all audio has been sent and\n processed.\n\n Sample code:\n\n <pre>\n try (SpeechClient speechClient = SpeechClient.create()) {\n RecognizeRequest request =\n RecognizeRequest.newBuilder()\n .setConfig(RecognitionConfig.newBuilder().build())\n .setAudio(RecognitionAudio.newBuilder().build())\n .build();\n RecognizeResponse response = speechClient.recognize(request);\n }\n </pre>"
syntax:
content: "public final UnaryCallable<RecognizeRequest,RecognizeResponse> recognizeCallable()"
syntax:
content: "public final BidiStreamingCallable<StreamingRecognizeRequest,StreamingRecognizeResponse> streamingRecognizeCallable()"
summary: "Service Description: Service that implements Google Cloud Speech API.\n\n This class provides the ability to make remote calls to the backing service through method\n calls that map to API methods. Sample code to get started:\n\n <pre>\n try (SpeechClient speechClient = SpeechClient.create()) {\n RecognitionConfig config = RecognitionConfig.newBuilder().build();\n RecognitionAudio audio = RecognitionAudio.newBuilder().build();\n RecognizeResponse response = speechClient.recognize(config, audio);\n }\n </pre>\n\n Note: close() needs to be called on the SpeechClient object to clean up resources such as\n threads. In the example above, try-with-resources is used, which automatically calls close().\n\n This class can be customized by passing in a custom instance of SpeechSettings to create().\n For example:\n\n To customize credentials:\n\n <pre>\n SpeechSettings speechSettings =\n SpeechSettings.newBuilder()\n .setCredentialsProvider(FixedCredentialsProvider.create(myCredentials))\n .build();\n SpeechClient speechClient = SpeechClient.create(speechSettings);\n </pre>\n\n To customize the endpoint:\n\n <pre>\n SpeechSettings speechSettings = SpeechSettings.newBuilder().setEndpoint(myEndpoint).build();\n SpeechClient speechClient = SpeechClient.create(speechSettings);\n </pre>"
summary: "Performs asynchronous speech recognition: receive results via the google.longrunning.Operations\n interface. Returns either an Operation.error or an Operation.response which contains a\n LongRunningRecognizeResponse message. For more information on asynchronous speech\n recognition, see the how-to.\n\n Sample code:\n\n <pre>\n try (SpeechClient speechClient = SpeechClient.create()) {\n LongRunningRecognizeRequest request =\n LongRunningRecognizeRequest.newBuilder()\n .setConfig(RecognitionConfig.newBuilder().build())\n .setAudio(RecognitionAudio.newBuilder().build())\n .setOutputConfig(TranscriptOutputConfig.newBuilder().build())\n .build();\n LongRunningRecognizeResponse response = speechClient.longRunningRecognizeAsync(request).get();\n }\n </pre>"
syntax:
content: "public final OperationFuture<LongRunningRecognizeResponse,LongRunningRecognizeMetadata> longRunningRecognizeAsync(LongRunningRecognizeRequest request)"
summary: "Performs asynchronous speech recognition: receive results via the google.longrunning.Operations\n interface. Returns either an Operation.error or an Operation.response which contains a\n LongRunningRecognizeResponse message. For more information on asynchronous speech\n recognition, see the how-to.\n\n Sample code:\n\n <pre>\n try (SpeechClient speechClient = SpeechClient.create()) {\n RecognitionConfig config = RecognitionConfig.newBuilder().build();\n RecognitionAudio audio = RecognitionAudio.newBuilder().build();\n LongRunningRecognizeResponse response =\n speechClient.longRunningRecognizeAsync(config, audio).get();\n }\n </pre>"
syntax:
content: "public final OperationFuture<LongRunningRecognizeResponse,LongRunningRecognizeMetadata> longRunningRecognizeAsync(RecognitionConfig config, RecognitionAudio audio)"
summary: "Performs asynchronous speech recognition: receive results via the google.longrunning.Operations\n interface. Returns either an Operation.error or an Operation.response which contains a\n LongRunningRecognizeResponse message. For more information on asynchronous speech\n recognition, see the how-to.\n\n Sample code:\n\n <pre>\n try (SpeechClient speechClient = SpeechClient.create()) {\n LongRunningRecognizeRequest request =\n LongRunningRecognizeRequest.newBuilder()\n .setConfig(RecognitionConfig.newBuilder().build())\n .setAudio(RecognitionAudio.newBuilder().build())\n .setOutputConfig(TranscriptOutputConfig.newBuilder().build())\n .build();\n ApiFuture<Operation> future = speechClient.longRunningRecognizeCallable().futureCall(request);\n // Do something.\n Operation response = future.get();\n }\n </pre>"
syntax:
content: "public final UnaryCallable<LongRunningRecognizeRequest,Operation> longRunningRecognizeCallable()"
summary: "Performs asynchronous speech recognition: receive results via the google.longrunning.Operations\n interface. Returns either an Operation.error or an Operation.response which contains a\n LongRunningRecognizeResponse message. For more information on asynchronous speech\n recognition, see the how-to.\n\n Sample code:\n\n <pre>\n try (SpeechClient speechClient = SpeechClient.create()) {\n LongRunningRecognizeRequest request =\n LongRunningRecognizeRequest.newBuilder()\n .setConfig(RecognitionConfig.newBuilder().build())\n .setAudio(RecognitionAudio.newBuilder().build())\n .setOutputConfig(TranscriptOutputConfig.newBuilder().build())\n .build();\n OperationFuture<LongRunningRecognizeResponse, LongRunningRecognizeMetadata> future =\n speechClient.longRunningRecognizeOperationCallable().futureCall(request);\n // Do something.\n LongRunningRecognizeResponse response = future.get();\n }\n </pre>"
syntax:
content: "public final OperationCallable<LongRunningRecognizeRequest,LongRunningRecognizeResponse,LongRunningRecognizeMetadata> longRunningRecognizeOperationCallable()"
syntax:
content: "public final RecognizeResponse recognize(RecognitionConfig config, RecognitionAudio audio)"
parameters:
@@ -363,7 +363,7 @@ items:
overload: "com.microsoft.samples.google.v1p1alpha.SpeechClient.recognize*"
type: "Method"
package: "com.microsoft.samples.google.v1p1alpha"
- summary: "Performs synchronous speech recognition: receive results after all audio has been sent and processed.\n\nSample code:\n\n```java\ntry (SpeechClient speechClient = SpeechClient.create()) {\n RecognizeRequest request =\n RecognizeRequest.newBuilder()\n .setConfig(RecognitionConfig.newBuilder().build())\n .setAudio(RecognitionAudio.newBuilder().build())\n .build();\n RecognizeResponse response = speechClient.recognize(request);\n }\n```"
+ summary: "Performs synchronous speech recognition: receive results after all audio has been sent and\n processed.\n\n Sample code:\n\n <pre>\n try (SpeechClient speechClient = SpeechClient.create()) {\n RecognitionConfig config = RecognitionConfig.newBuilder().build();\n RecognitionAudio audio = RecognitionAudio.newBuilder().build();\n RecognizeResponse response = speechClient.recognize(config, audio);\n }\n </pre>"
syntax:
content: "public final RecognizeResponse recognize(RecognizeRequest request)"
parameters:
@@ -383,7 +383,7 @@ items:
overload: "com.microsoft.samples.google.v1p1alpha.SpeechClient.recognizeCallable*"
type: "Method"
package: "com.microsoft.samples.google.v1p1alpha"
- summary: "Performs synchronous speech recognition: receive results after all audio has been sent and processed.\n\nSample code:\n\n```java\ntry (SpeechClient speechClient = SpeechClient.create()) {\n RecognizeRequest request =\n RecognizeRequest.newBuilder()\n .setConfig(RecognitionConfig.newBuilder().build())\n .setAudio(RecognitionAudio.newBuilder().build())\n .build();\n ApiFuture future = speechClient.recognizeCallable().futureCall(request);\n // Do something.\n RecognizeResponse response = future.get();\n }\n```"
+ summary: "Performs synchronous speech recognition: receive results after all audio has been sent and\n processed.\n\n Sample code:\n\n <pre>\n try (SpeechClient speechClient = SpeechClient.create()) {\n RecognizeRequest request =\n RecognizeRequest.newBuilder()\n .setConfig(RecognitionConfig.newBuilder().build())\n .setAudio(RecognitionAudio.newBuilder().build())\n .build();\n RecognizeResponse response = speechClient.recognize(request);\n }\n </pre>"
syntax:
content: "public final UnaryCallable<RecognizeRequest,RecognizeResponse> recognizeCallable()"
syntax:
content: "public final BidiStreamingCallable<StreamingRecognizeRequest,StreamingRecognizeResponse> streamingRecognizeCallable()"
summary: "First\n code block?\n\n Second\n code block?"
syntax:
content: "public interface Display"
\n\n This is an \"at\" symbol: @"
syntax:
content: "public class Person"
summary: "======================= SpeechClient =======================\n\n Service Description: Service that implements Google Cloud Speech API.\n\n Sample for SpeechClient:\n\n <pre>\n try (SpeechClient speechClient = SpeechClient.create()) {\n RecognitionConfig config = RecognitionConfig.newBuilder().build();\n RecognitionAudio audio = RecognitionAudio.newBuilder().build();\n RecognizeResponse response = speechClient.recognize(config, audio);\n }\n </pre>\n\n ======================= AdaptationClient =======================\n\n Service Description: Service that implements Google Cloud Speech Adaptation API.\n\n Sample for AdaptationClient:\n\n <pre>\n try (AdaptationClient adaptationClient = AdaptationClient.create()) {\n LocationName parent = LocationName.of(\"[PROJECT]\", \"[LOCATION]\");\n PhraseSet phraseSet = PhraseSet.newBuilder().build();\n String phraseSetId = \"phraseSetId959902180\";\n PhraseSet response = adaptationClient.createPhraseSet(parent, phraseSet, phraseSetId);\n }\n </pre>"
syntax:
content: "package com.microsoft.samples"
references: