Commit 741b8c1

space-pope authored and timmywil committed
docs: remove/update bad links; adjust formatting (#17)
1 parent 9fcc2eb commit 741b8c1

3 files changed: +15 -10 lines changed

content/docs/Android/getting-started.md

Lines changed: 2 additions & 2 deletions
@@ -134,7 +134,7 @@ We've listed all possible speech events here; see [the documentation](https://ww
 
 If the event is `RECOGNIZE`, `context.transcript` will give you the raw text of what the user just said. Translating that raw text into an action in your app is the job of an NLU, or natural language understanding, component. Spokestack currently leaves the choice of NLU up to the app: There's a variety of NLU services out there ([DialogFlow](https://dialogflow.com/), [LUIS](https://www.luis.ai/home), or [wit.ai](https://wit.ai/), to name a few), or, if your app is simple enough, you can make your own with string matching or regular expressions.
 
-We know that NLU is an important piece of the puzzle, and we're working on a full-featured NLU component for Spokestack based on years of research and lessons learned from working with the other services; [sign up for our newsletter](LINK) to be the first to know when it's ready.
+We know that NLU is an important piece of the puzzle, and we're working on a full-featured NLU component for Spokestack based on years of research and lessons learned from working with the other services; we'll update this space when it's ready.
 
 For the sake of our demo, though, let's say you're creating a voice-controlled timer. `handleSpeech` might look something like this:
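
A rough Kotlin sketch of the regex routing described above; this is an illustration only, not the guide's actual `handleSpeech` code. `startTimer` and `stopTimer` are hypothetical stand-ins for the app's own timer logic, and the transcript is assumed to arrive as a plain `String`:

```kotlin
// Hypothetical stand-ins for the app's real timer logic.
fun startTimer(minutes: Int) = println("Starting a $minutes-minute timer")
fun stopTimer() = println("Stopping the timer")

// A naive regex "NLU": map a raw transcript to an app action.
// Good enough for a demo; a real NLU service handles phrasing variation far better.
fun routeTranscript(transcript: String) {
    val text = transcript.lowercase().trim()
    val setTimer = Regex("""(?:set|start)(?: a)? timer for (\d+) minutes?""")
    val match = setTimer.find(text)
    when {
        match != null -> startTimer(match.groupValues[1].toInt())
        "stop" in text || "cancel" in text -> stopTimer()
        else -> println("Sorry, I didn't catch that: \"$transcript\"")
    }
}

fun main() {
    routeTranscript("Set a timer for 5 minutes")  // starts a 5-minute timer
    routeTranscript("Stop the timer")             // stops the timer
}
```

In practice, this is the point where you would hand the transcript to whichever NLU option you chose.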

@@ -202,7 +202,7 @@ class MyActivity : AppCompatActivity(), OnSpeechEventListener, TTSCallback {
 }
 ```
 
-The API key in this example sets you up to use the demo voice available for free with Spokestack; for more configuration options and details about controlling pronunciation, see [the TTS guide](tts).
+The API key in this example sets you up to use the demo voice available for free with Spokestack; for more configuration options and details about controlling pronunciation, see [the TTS guide](/docs/Concepts/tts).
 
 ## Conclusion

content/docs/Concepts/tts.md

Lines changed: 12 additions & 7 deletions
@@ -23,10 +23,12 @@ Note that long inputs should be split into separate `s` ("sentence") elements fo
 Currently, Spokestack is focused on pronunciation of English words and loan words/foreign words common in spoken English and thus restricts its character set from the full range of [IPA](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet) characters. Characters valid for an IPA `ph` attribute are:
 
 ```bash
-[' ', ',', 'a', 'b', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm',
-'n', 'o', 'p', 'r', 's', 't', 'u', 'v', 'w', 'z', 'æ', 'ð', 'ŋ', 'ɑ',
-'ɔ', 'ə', 'ɛ', 'ɝ', 'ɪ', 'ʃ', 'ʊ', 'ʌ', 'ʒ', 'ˈ', 'ˌ', 'ː', 'θ', 'ɡ',
-'x', 'y', 'ɹ', 'ʰ', 'ɜ', 'ɒ', 'ɚ', 'ɱ', 'ʔ', 'ɨ', 'ɾ', 'ɐ', 'ʁ', 'ɵ', 'χ']
+[' ', ',', 'a', 'b', 'd', 'e', 'f', 'g', 'h', 'i', 'j',
+'k', 'l', 'm', 'n', 'o', 'p', 'r', 's', 't', 'u', 'v',
+'w', 'z', 'æ', 'ð', 'ŋ', 'ɑ', 'ɔ', 'ə', 'ɛ', 'ɝ', 'ɪ',
+'ʃ', 'ʊ', 'ʌ', 'ʒ', 'ˈ', 'ˌ', 'ː', 'θ', 'ɡ', 'x', 'y',
+'ɹ', 'ʰ', 'ɜ', 'ɒ', 'ɚ', 'ɱ', 'ʔ', 'ɨ', 'ɾ', 'ɐ', 'ʁ',
+'ɵ', 'χ']
 ```
 
 and the emphasis symbols `ˈ`, `,`, `ˌ`, and `ː`.
@@ -35,11 +37,14 @@ Using invalid characters will not cause an error, but it might result in unexpec
 
 ### Some brief examples
 
-- when you just can't give up that web prefix:
+- When you just can't give up that web prefix:
+
 `<speak>See all our products at <say-as interpret-as="characters">www</say-as> dot my company dot com</speak>`
 
-- insert a pregnant pause:
+- Insert a pregnant pause:
+
 `<speak>Today's stock price <break time="500ms"/> fell three percent.</speak>`
 
-- customize pronunciation to make a point:
+- Customize pronunciation to make a point:
+
 `<speak>I don't care what you say; it's pronounced <phoneme alphabet="ipa" ph="gɪf">gif</phoneme>, not <phoneme alphabet="ipa" ph="dʒɪf">gif</phoneme>!</speak>`
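
Since invalid `ph` characters don't raise an error but can skew pronunciation, a quick pre-check against the set above can catch typos before synthesis. A minimal Kotlin sketch, offered as an illustration rather than anything in Spokestack's API:

```kotlin
// The characters accepted in an IPA `ph` attribute, per the list above.
val allowedIpa: Set<Char> = setOf(
    ' ', ',', 'a', 'b', 'd', 'e', 'f', 'g', 'h', 'i', 'j',
    'k', 'l', 'm', 'n', 'o', 'p', 'r', 's', 't', 'u', 'v',
    'w', 'z', 'æ', 'ð', 'ŋ', 'ɑ', 'ɔ', 'ə', 'ɛ', 'ɝ', 'ɪ',
    'ʃ', 'ʊ', 'ʌ', 'ʒ', 'ˈ', 'ˌ', 'ː', 'θ', 'ɡ', 'x', 'y',
    'ɹ', 'ʰ', 'ɜ', 'ɒ', 'ɚ', 'ɱ', 'ʔ', 'ɨ', 'ɾ', 'ɐ', 'ʁ',
    'ɵ', 'χ'
)

// Return any characters in a `ph` value that fall outside the allowed set.
// Spokestack won't report these, but they can produce unexpected pronunciation.
fun invalidIpaChars(ph: String): Set<Char> = ph.toSet() - allowedIpa

fun main() {
    println(invalidIpaChars("gɪf"))  // [] -- every character is valid
    println(invalidIpaChars("ʤɪf"))  // [ʤ] -- write the affricate as 'dʒ' instead
}
```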

content/docs/iOS/getting-started.md

Lines changed: 1 addition & 1 deletion
@@ -113,7 +113,7 @@ All we're doing here is reflecting system events back to the main pipeline. See
 
 Inside `didRecognize`, `result.transcript` will give you the raw text of what the user just said. Translating that raw text into an action in your app is the job of an NLU, or natural language understanding, component. Spokestack currently leaves the choice of NLU up to the app: There's a variety of NLU services out there ([DialogFlow](https://dialogflow.com/), [LUIS](https://www.luis.ai/home), or [wit.ai](https://wit.ai/), to name a few), or, if your app is simple enough, you can make your own with string matching or regular expressions.
 
-We know that NLU is an important piece of the puzzle, and we're working on a full-featured NLU component for Spokestack based on years of research and lessons learned from working with the other services; [sign up for our newsletter](LINK) to be the first to know when it's ready.
+We know that NLU is an important piece of the puzzle, and we're working on a full-featured NLU component for Spokestack based on years of research and lessons learned from working with the other services; we'll update this space when it's ready.
 
 For the sake of our demo, though, let's say you're creating a voice-controlled timer. `didRecognize` might look something like this:

0 commit comments
