content/docs/Android/getting-started.md (2 additions, 2 deletions)
@@ -134,7 +134,7 @@ We've listed all possible speech events here; see [the documentation](https://ww
If the event is `RECOGNIZE`, `context.transcript` will give you the raw text of what the user just said. Translating that raw text into an action in your app is the job of an NLU, or natural language understanding, component. Spokestack currently leaves the choice of NLU up to the app: There's a variety of NLU services out there ([DialogFlow](https://dialogflow.com/), [LUIS](https://www.luis.ai/home), or [wit.ai](https://wit.ai/), to name a few), or, if your app is simple enough, you can make your own with string matching or regular expressions.
-We know that NLU is an important piece of the puzzle, and we're working on a full-featured NLU component for Spokestack based on years of research and lessons learned from working with the other services; [sign up for our newsletter](LINK) to be the first to know when it's ready.
+We know that NLU is an important piece of the puzzle, and we're working on a full-featured NLU component for Spokestack based on years of research and lessons learned from working with the other services; we'll update this space when it's ready.
For the sake of our demo, though, let's say you're creating a voice-controlled timer. `handleSpeech` might look something like this:
-The API key in this example sets you up to use the demo voice available for free with Spokestack; for more configuration options and details about controlling pronunciation, see [the TTS guide](tts).
+The API key in this example sets you up to use the demo voice available for free with Spokestack; for more configuration options and details about controlling pronunciation, see [the TTS guide](/docs/Concepts/tts).
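To make the string-matching approach concrete, here is a minimal sketch of the `handleSpeech` handler mentioned above for the voice-controlled timer demo. It assumes `handleSpeech` receives the speech event along with its `SpeechContext` (so `context.transcript` holds the raw text, as described above); the import path and event name follow the Spokestack Android library referenced in this guide, and `startTimer`/`cancelTimer` are hypothetical stand-ins for your app's timer logic.

```kotlin
import io.spokestack.spokestack.SpeechContext  // Spokestack's speech context type; adjust to your setup

// Hypothetical timer hooks for this sketch; wire these to your app's real timer.
private fun startTimer(minutes: Int) { /* ... */ }
private fun cancelTimer() { /* ... */ }

private fun handleSpeech(event: SpeechContext.Event, context: SpeechContext) {
    if (event == SpeechContext.Event.RECOGNIZE) {
        // the raw text of what the user just said
        val transcript = context.transcript.lowercase()
        // a string-matching/regex stand-in for a real NLU
        val minutes = Regex("(\\d+)\\s+minute").find(transcript)
            ?.groupValues?.get(1)?.toIntOrNull()
        when {
            minutes != null -> startTimer(minutes)
            "cancel" in transcript || "stop" in transcript -> cancelTimer()
            else -> { /* no match; ask the user to rephrase */ }
        }
    }
}
```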
content/docs/Concepts/tts.md (12 additions, 7 deletions)
@@ -23,10 +23,12 @@ Note that long inputs should be split into separate `s` ("sentence") elements fo
Currently, Spokestack is focused on pronunciation of English words and loan words/foreign words common in spoken English and thus restricts its character set from the full range of [IPA](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet) characters. Characters valid for an IPA `ph` attribute are:
@@ -35,11 +37,14 @@ Using invalid characters will not cause an error, but it might result in unexpec
### Some brief examples
-- when you just can't give up that web prefix:
+- When you just can't give up that web prefix:
+
`<speak>See all our products at <say-as interpret-as="characters">www</say-as> dot my company dot com</speak>`
-- insert a pregnant pause:
+- Insert a pregnant pause:
+
`<speak>Today's stock price <break time="500ms"/> fell three percent.</speak>`
-- customize pronunciation to make a point:
+- Customize pronunciation to make a point:
+
`<speak>I don't care what you say; it's pronounced <phoneme alphabet="ipa" ph="gɪf">gif</phoneme>, not <phoneme alphabet="ipa" ph="dʒɪf">gif</phoneme>!</speak>`
content/docs/iOS/getting-started.md (1 addition, 1 deletion)
@@ -113,7 +113,7 @@ All we're doing here is reflecting system events back to the main pipeline. See
Inside `didRecognize`, `result.transcript` will give you the raw text of what the user just said. Translating that raw text into an action in your app is the job of an NLU, or natural language understanding, component. Spokestack currently leaves the choice of NLU up to the app: There's a variety of NLU services out there ([DialogFlow](https://dialogflow.com/), [LUIS](https://www.luis.ai/home), or [wit.ai](https://wit.ai/), to name a few), or, if your app is simple enough, you can make your own with string matching or regular expressions.
-We know that NLU is an important piece of the puzzle, and we're working on a full-featured NLU component for Spokestack based on years of research and lessons learned from working with the other services; [sign up for our newsletter](LINK) to be the first to know when it's ready.
+We know that NLU is an important piece of the puzzle, and we're working on a full-featured NLU component for Spokestack based on years of research and lessons learned from working with the other services; we'll update this space when it's ready.
For the sake of our demo, though, let's say you're creating a voice-controlled timer. `didRecognize` might look something like this:
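On the iOS side, here is a comparable sketch of `didRecognize` for the same timer demo, again using plain string matching in place of a real NLU. It assumes `result.transcript` holds the raw text as described above; the timer functions are hypothetical, the module name in the import may differ for your setup, and in a real app this method belongs to the class conforming to the pipeline's listener protocol rather than standing alone.

```swift
import Foundation
import Spokestack  // the Spokestack iOS framework this guide is built around; adjust to your setup

// Hypothetical timer hooks for this sketch; wire these to your app's real timer.
func startTimer(minutes: Int) { /* ... */ }
func cancelTimer() { /* ... */ }

// In the guide, didRecognize lives on the class acting as the pipeline's delegate.
func didRecognize(_ result: SpeechContext) {
    // the raw text of what the user just said
    let transcript = result.transcript.lowercased()
    // a string-matching/regex stand-in for a real NLU
    if transcript.contains("cancel") || transcript.contains("stop") {
        cancelTimer()
    } else if transcript.contains("minute"),
              let digits = transcript.range(of: "[0-9]+", options: .regularExpression),
              let minutes = Int(transcript[digits]) {
        startTimer(minutes: minutes)
    } else {
        // no match; ask the user to rephrase
    }
}
```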