Conversation

poretsky
Just an idea. It seems user interaction would be noticeably more
effective if there were a way to assign different voices to elements
of different types, for instance by changing voice pitch. Such changes
are detected instantly, so while listening to an element's content the
user would already know its role. This is especially valuable when
browsing web pages.

Of course, earcons serve the same purpose and really help in many
cases, but not everywhere. On the other hand, when the user's
attention is focused on content, voice pitch changes are less likely
to be overlooked.

More element types distinguishable by voice.
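A minimal sketch of the idea, assuming a hypothetical role-to-pitch mapping (the role names and pitch multipliers here are illustrative, not TalkBack's; in an Android implementation the resulting multiplier could be passed to the real `TextToSpeech.setPitch()` API before speaking the element):

```java
import java.util.Map;

public class RolePitch {
    // Hypothetical mapping from element role to a pitch multiplier.
    // 1.0 is the normal voice; values below lower the pitch, above raise it.
    static final Map<String, Float> PITCH = Map.of(
            "heading", 0.8f,
            "link", 1.2f,
            "button", 1.1f,
            "text", 1.0f);

    // Roles without an explicit entry fall back to the normal voice.
    static float pitchFor(String role) {
        return PITCH.getOrDefault(role, 1.0f);
    }

    public static void main(String[] args) {
        System.out.println(pitchFor("link"));  // prints 1.2
        System.out.println(pitchFor("table")); // prints 1.0 (fallback)
    }
}
```

With a table like this, the screen reader would adjust the pitch once per element before announcing it, so the role is conveyed by the voice itself rather than by an extra spoken word or earcon.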
@svtsvet

svtsvet commented Mar 22, 2020

I suppose it's an interesting and useful feature.
It could accelerate interaction with the app. But it would be very difficult for most users, because there are many different voices. Maybe it would be easier with different sounds?

@poretsky
Author

poretsky commented Mar 22, 2020 via email

@svtsvet

svtsvet commented Mar 24, 2020 via email

@liangxiwei

I want to say: if the project cannot be built with Android Studio like a third-party app, don't open-source it.

@devinprater

This would be a very good feature, especially for formatting changes in text, like italics and bold.

@poretsky
Author

poretsky commented Aug 7, 2021 via email

@PatrykMis

Great feature, I love your idea. I'm considering merging (cherry-picking) it into the TalkBack FOSS fork.

Unfortunately, Google doesn't pay attention to PRs on this repo, according to this comment.

@amirmahdifard

@poretsky Hi. Can you please tell me how to build this with Android Studio on Windows?

6 participants