2-Way Audio SDK for Android
Introduction
The Eagle Eye 2-Way Audio Android SDK is a wrapper for the communication between a client (i.e. an Android app) and the 2-Way Audio Signaling Services using WebRTC. Developers can use the exposed methods to easily establish and disconnect sessions.
This SDK makes it possible to send audio to the speakers available on the account.
Installation
To add the 2-Way Audio Android SDK to your project, you will need to create a GitHub token and set up your project with the following steps:
- Open the GitHub website and log in with your account.
- Go to [Your Profile] => [Developer settings] => [Personal access tokens] => [Generate new token] on github.com
- Check “read:packages”
- Click “Generate token”
- Copy the generated token
- Add the lines below to your “~/.gradle/gradle.properties”:
GITHUB_ID={your github id}
GITHUB_PACKAGES_TOKEN={the generated token}
- Merge the lines below into your “/settings.gradle” (if you are using a recent version of the Android Gradle Plugin):
dependencyResolutionManagement {
    repositories {
        maven {
            url "https://maven.pkg.github.com/EENCloud/VMS-Developer-Portal"
            credentials {
                username GITHUB_ID
                password GITHUB_PACKAGES_TOKEN
            }
        }
    }
}
- Alternatively, add the following to your “/build.gradle”:
allprojects {
    repositories {
        maven {
            url "https://maven.pkg.github.com/EENCloud/VMS-Developer-Portal"
            credentials {
                username GITHUB_ID
                password GITHUB_PACKAGES_TOKEN
            }
        }
    }
}
- Finally, add the artifact dependency to your “/build.gradle”:
dependencies {
    implementation "com.een:een-two-way-audio-android-sdk:x.y.z"
}
If GitHub Packages is not working for you or you don’t want to use it, you can download the AAR file directly from the GitHub Packages page (URL). The AAR file is under the Assets section.
Initial Requirements
● Add the een-two-way-audio-android-sdk-x.y.z.aar to your project.
● If you’re using R8, add the following lines to your proguard-rules.pro:
-keep class org.webrtc.** { *; }
-keepclasseswithmembernames class * { native <methods>; }
-keep class com.een.twowayaudio.signaling.model.** { *; }
To use the 2-Way Audio feature, the account must already meet the following requirements:
● For any API call, you should already be logged in and have an access token available to authenticate the API calls
● The account should have a camera that is associated with the speaker
● The bridge version should be 3.8.0 or higher, with current camera-support drivers installed
Note
● When a speaker device is detectable by more than one bridge, all bridges must support using the speaker. They all need bridge version 3.8.0 or higher as well as current camera-support drivers. Contact Eagle Eye Support for help getting the latest version.
● The maximum length of a 2-Way Audio session is 10 minutes. After 10 minutes the session will be disconnected.
The /cameras API response will have a "speakerId" value for all cameras with associated speakers.
Below is an example response from the /cameras API for a camera with an associated speaker:
{
    "nextPageToken": "",
    "prevPageToken": "",
    "totalSize": 12,
    "results": [
        {
            "id": "1002c1f6",
            "accountId": "00032511",
            "name": "testNormalCam",
            "bridgeId": "1003171a",
            "locationId": "7803a2f4-32a8-43ff-816d-eb8c03cb739f",
            "speakerId": "100ea79e"
        }
    ]
}
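As an illustration, below is a minimal Kotlin sketch of extracting the speaker associations from such a response. The helper function and variable names are hypothetical and not part of the SDK; it assumes you already have the raw JSON body as a string.
fun extractSpeakerAssociations(camerasResponseJson: String): Map<String, String> {
    // Hypothetical helper: maps camera IDs to their associated speaker IDs
    // from a /cameras response body. Cameras without a speaker are skipped.
    val results = org.json.JSONObject(camerasResponseJson).getJSONArray("results")
    return buildMap {
        for (i in 0 until results.length()) {
            val camera = results.getJSONObject(i)
            // optString returns "" when the camera has no "speakerId" field
            val speakerId = camera.optString("speakerId")
            if (speakerId.isNotEmpty()) {
                put(camera.getString("id"), speakerId)
            }
        }
    }
}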
Getting the WebRTC URL
Get the WebRTC URL, device ESN, and media type using the /feeds API.
curl --location --request GET '<baseUrl>/api/v3.0/feeds?deviceId=1002c1f6&type=talkdown&include=webRtcUrl' \
--header 'Accept: application/json' \
--header 'Authorization: Bearer <access token>'
{
    "nextPageToken": "",
    "prevPageToken": "",
    "results": [
        {
            "id": "1002c1f6-talkdown",
            "type": "talkdown",
            "mediaType": "halfDuplex",
            "webRtcUrl": "wss://edge.c013.eagleeyenetworks.com"
        }
    ]
}
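The initialization example in the next section reads these values from a twoWayAudioInfo object. Below is a minimal sketch of such a holder, assuming you parse the /feeds response yourself; the data class and parser are illustrative and not part of the SDK.
// Illustrative holder for the values needed to initialize the SDK.
data class TwoWayAudioInfo(
    val deviceId: String,
    val webRtcUrl: String,
    val mediaType: String,
)

// Hypothetical parser: takes the raw /feeds response body and the camera ID
// that was passed as deviceId in the request.
fun parseTwoWayAudioInfo(feedsResponseJson: String, deviceId: String): TwoWayAudioInfo {
    val feed = org.json.JSONObject(feedsResponseJson)
        .getJSONArray("results")
        .getJSONObject(0)
    return TwoWayAudioInfo(
        deviceId = deviceId,
        webRtcUrl = feed.getString("webRtcUrl"),
        mediaType = feed.getString("mediaType"),
    )
}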
Initialize the SDK
Initialize the TwoWayAudioClient with a TwoWayAudioModel:
val twoWayAudioClient = TwoWayAudioClient(
    twoWayAudioModel = TwoWayAudioModel(
        webRtcUri = URI(twoWayAudioInfo.webRtcUrl),
        sourceID = twoWayAudioInfo.deviceId,
        authKey = cookies.authKey
    ),
    context = this
)
Handle Permissions
You have to check the recording (RECORD_AUDIO) permission manually; the peer connection will not work without it:
companion object {
    private const val PERMISSIONS_REQUEST_CODE = 1
    private val PERMISSIONS = listOf(
        Manifest.permission.RECORD_AUDIO,
    )
}

private fun checkTwoWayAudioPermissions() {
    val permissions = PERMISSIONS.toTypedArray()
    if (hasPermissions(*permissions)) {
        twoWayAudioClient.connect()
    } else {
        ActivityCompat.requestPermissions(this, permissions, PERMISSIONS_REQUEST_CODE)
    }
}

private fun hasPermissions(vararg permissions: String): Boolean =
    permissions.all {
        ActivityCompat.checkSelfPermission(this, it) == PackageManager.PERMISSION_GRANTED
    }
Override the onRequestPermissionsResult method inside your activity:
override fun onRequestPermissionsResult(
    requestCode: Int,
    permissions: Array<out String>,
    grantResults: IntArray
) {
    super.onRequestPermissionsResult(requestCode, permissions, grantResults)
    if (requestCode != PERMISSIONS_REQUEST_CODE) {
        return
    }
    val permissionGranted = grantResults.all {
        it == PackageManager.PERMISSION_GRANTED
    }
    if (permissionGranted) {
        twoWayAudioClient.connect()
    } else {
        twoWayAudioClient.onPermissionNotGranted()
    }
}
Implement flows
Collect the status and audioLevel flows exposed by the client to react to connection state changes and microphone amplitude updates:
lifecycleScope.launch {
    twoWayAudioClient.status.collect {
        when (it) {
            is TwoWayAudioStatus.Initial -> {
                applyTwoWayAudioDefaultState()
            }
            is TwoWayAudioStatus.Connecting -> {
                applyTwoWayAudioConnectingState()
            }
            is TwoWayAudioStatus.Connected -> {
                applyTwoWayAudioConnectedState()
            }
            is TwoWayAudioStatus.Disconnected -> {
                when (val disconnectReason = it.disconnectReason) {
                    is TwoWayAudioDisconnectReason.Manual -> {
                        showSpeakerDisconnectedSnackbar()
                        applyCrossedMicrophoneTimer()
                    }
                    is TwoWayAudioDisconnectReason.NoPermission -> {
                        showMicrophonePermissionRequiredSnackbar()
                        applyCrossedMicrophoneTimer()
                    }
                    is TwoWayAudioDisconnectReason.Error -> {
                        when (disconnectReason.error) {
                            is TwoWayAudioError.Timeout,
                            is TwoWayAudioError.Generic -> {
                                showSpeakerDisconnectedSnackbar()
                                applyCrossedMicrophoneTimer()
                            }
                            is TwoWayAudioError.Busy -> {
                                showSpeakerBusySnackbar()
                                applyCrossedMicrophoneTimer()
                            }
                        }
                    }
                }
            }
        }
    }
}

lifecycleScope.launch {
    twoWayAudioClient.audioLevel.collect {
        // handle amplitude animation
    }
}
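As an illustration of the amplitude handling, here is a minimal sketch that scales a microphone view with the emitted audio level. The view name and the 0.0–1.0 normalization are assumptions, not guarantees made by the SDK.
lifecycleScope.launch {
    twoWayAudioClient.audioLevel.collect { level ->
        // Assumes the level is roughly normalized to 0.0..1.0; clamp it defensively
        // and map it to a subtle pulse on a hypothetical microphone view.
        val scale = 1f + 0.3f * level.coerceIn(0.0, 1.0).toFloat()
        microphoneView.scaleX = scale
        microphoneView.scaleY = scale
    }
}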
Connect
To establish the WebRTC connection, call the connect() method and wait for the status flow to emit either the Connected state or a Disconnected state carrying an error.
twoWayAudioClient.connect()
Disconnect
To close the WebRTC connection, call the disconnect() method and wait for the status flow to emit the Disconnected state.
twoWayAudioClient.disconnect()
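A common pattern, shown as a sketch below, is to disconnect when the hosting screen stops so the session and microphone are released. This lifecycle placement is a suggestion, not an SDK requirement.
override fun onStop() {
    super.onStop()
    // Release the session when the screen is no longer visible.
    twoWayAudioClient.disconnect()
}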
Models
To initialize the SDK you need to provide the following model:
data class TwoWayAudioModel(
    val webRtcUri: URI,
    val sourceID: String,
    val authKey: String,
    val type: TwoWayAudioType = TwoWayAudioType.Talkdown,
    val mediaType: TwoWayAudioMediaType = TwoWayAudioMediaType.HalfDuplex,
)

sealed class TwoWayAudioType(val name: String) {
    object Talkdown : TwoWayAudioType("talkdown")
}

sealed class TwoWayAudioMediaType(val name: String) {
    object HalfDuplex : TwoWayAudioMediaType("halfDuplex")
}
| Variable Name | Type | Description | Example |
|---|---|---|---|
| sourceID | String | The ESN of the two-way audio device (camera/speaker). | "100abcde" |
| webRtcUri | URI | The WebSocket URL of the edge service. The URL follows the "wss://edge.<speaker_account_cluster>.eagleeyenetworks.com" pattern. | URI("wss://edge.c000.eagleeyenetworks.com") |
| type | TwoWayAudioType | The WebRTC service type; the value is always "talkdown". | TwoWayAudioType.Talkdown |
| mediaType | TwoWayAudioMediaType | The media type of the two-way audio session; supported values are "fullDuplex" and "halfDuplex". | TwoWayAudioMediaType.HalfDuplex |
| authKey | String | The AuthKey needed to authorize the connection. | “----” |
The 2-Way Audio SDK can be in one of the following self-explanatory states:
sealed class TwoWayAudioStatus {
    object Initial : TwoWayAudioStatus()
    class Disconnected(val disconnectReason: TwoWayAudioDisconnectReason) : TwoWayAudioStatus()
    object Connecting : TwoWayAudioStatus()
    object Connected : TwoWayAudioStatus()
}

sealed class TwoWayAudioDisconnectReason {
    object Manual : TwoWayAudioDisconnectReason()
    object NoPermission : TwoWayAudioDisconnectReason()
    class Error(val error: TwoWayAudioError) : TwoWayAudioDisconnectReason()
}

sealed class TwoWayAudioError {
    object Busy : TwoWayAudioError()
    object Timeout : TwoWayAudioError()
    object Generic : TwoWayAudioError()
}
Methods
The SDK provides the following public methods:
fun connect()
This method will start an audio session:
● It will start the connection with the signaling server.
● Once the connection is established, it will authorize it.
● Once the connection is authorized, it will request the ICE servers.
● Once the ICE servers are received, the WebRTC client will send an offer to the server.
● Once the offer is sent, an answer is expected from the server and the session will be created.
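As a usage sketch, the outcome of connect() can be awaited through the status flow. The surrounding coroutine scope and handling are illustrative; it assumes kotlinx.coroutines.flow.first is imported and that the client starts from the Initial or Connecting state.
lifecycleScope.launch {
    twoWayAudioClient.connect()
    // Wait for the first terminal outcome of this connection attempt.
    val outcome = twoWayAudioClient.status.first {
        it is TwoWayAudioStatus.Connected || it is TwoWayAudioStatus.Disconnected
    }
    if (outcome is TwoWayAudioStatus.Disconnected) {
        // Inspect outcome.disconnectReason to tell errors from manual disconnects.
    }
}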
fun disconnect()
It will disconnect the signaling and WebRTC clients and release the associated resources.
Flows
The SDK provides the following flows:
State Flow
This flow allows the developer to keep track of all the connection status updates.
val status: StateFlow<TwoWayAudioStatus>
Shared Flow
This flow allows the developer to keep track of the microphone amplitude value, which can be used to implement synchronized animations.
val audioLevel: SharedFlow<Double>