API Level: 14
Android 4.0 (Ice Cream Sandwich) is a major platform release that adds new capabilities for users and developers. The sections below provide an overview of the new features and developer APIs.
For developers, the Android 4.0 platform is available as a downloadable component for the Android SDK. The downloadable platform includes an Android library and system image, as well as a set of emulator skins and more. The downloadable platform includes no external libraries.
To start developing or testing against Android 4.0, use the Android SDK Manager to download the platform into your SDK. For more information, see Adding SDK Components. If you are new to Android, download the SDK Starter Package first.
Reminder: If you've already published an Android application, please test your application on Android 4.0 as soon as possible to be sure your application provides the best experience possible on the latest Android-powered devices.
For a high-level introduction to the new user and developer features in Android 4.0, see the Platform Highlights.
To determine what revision of the Android 4.0 platform you have installed, refer to the "Installed Packages" listing in the Android SDK Manager.
Android 4.0, Revision 1 (October 2011)
The sections below provide a technical overview of new APIs in Android 4.0.
The contact APIs that are defined by the ContactsContract
provider have
been extended to support new features such as a personal profile for the device owner, high
resolution contact photos, and the ability for users to invite individual contacts to social
networks that are installed on the device.
Android now includes a personal profile that represents the device owner, as defined by the
ContactsContract.Profile
table. Social apps that maintain a user identity
can contribute to the user's profile data by creating a new ContactsContract.RawContacts
entry within the ContactsContract.Profile
. That is, raw contacts that represent the device user do
not belong in the traditional raw contacts table defined by the ContactsContract.RawContacts
Uri; instead, you must add a profile raw contact in
the table at CONTENT_RAW_CONTACTS_URI
. Raw
contacts in this table are then aggregated into the single user-visible profile labeled "Me".
Adding a new raw contact for the profile requires the WRITE_PROFILE
permission. Likewise, in order to read from the profile
table, you must request the READ_PROFILE
permission. However,
most apps should not need to read the user profile, even when contributing data to the
profile. Reading the user profile is a sensitive permission and you should expect users to be
skeptical of apps that request it.
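As a rough sketch, a social app holding the WRITE_PROFILE permission might add a profile raw contact like this (the account type and name are hypothetical placeholders for your own sync account):

ContentValues values = new ContentValues();
// Hypothetical sync account identifying your app's contact data
values.put(ContactsContract.RawContacts.ACCOUNT_TYPE, "com.example.account");
values.put(ContactsContract.RawContacts.ACCOUNT_NAME, "user@example.com");
// Insert into the profile's raw contacts table, not the regular one
Uri rawContactUri = getContentResolver().insert(
        ContactsContract.Profile.CONTENT_RAW_CONTACTS_URI, values);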
Android now supports high resolution photos for contacts. Now, when you push a photo into a
contact record, the system processes it into both a 96x96 thumbnail (as it has previously) and a
256x256 "display photo" that's stored in a new file-based photo store (the exact dimensions that the
system chooses may vary in the future). You can add a large photo to a contact by putting it in the usual PHOTO
column of a
data row, which the system will then process into the appropriate thumbnail and display photo
records.
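For example, a minimal sketch that writes a photo to an existing raw contact and lets the system derive the thumbnail and display photo (rawContactId and photoBytes, a byte array of compressed image data, are hypothetical values):

ContentValues values = new ContentValues();
values.put(ContactsContract.Data.RAW_CONTACT_ID, rawContactId);
values.put(ContactsContract.Data.MIMETYPE,
        ContactsContract.CommonDataKinds.Photo.CONTENT_ITEM_TYPE);
values.put(ContactsContract.CommonDataKinds.Photo.PHOTO, photoBytes);
getContentResolver().insert(ContactsContract.Data.CONTENT_URI, values);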
The INVITE_CONTACT
intent action allows an app
to invoke an action that indicates the user wants to add a contact to a social network. The app
that receives the intent uses it to invite the specified contact to that
social network. Most apps will be on the receiving-end of this operation. For example, the
built-in People app invokes the invite intent when the user selects "Add connection" for a specific
social app that's listed in a person's contact details.
To make your app visible in the "Add connection" list, your app must provide a sync adapter to
sync contact information from your social network. You must then indicate to the system that your
app responds to the INVITE_CONTACT
intent by
adding the inviteContactActivity
attribute to your app’s sync configuration file, with a
fully-qualified name of the activity that the system should start when sending the invite intent.
The activity that starts can then retrieve the URI for the contact in question from the intent’s
data and perform the necessary work to invite that contact to the network or add the person to the
user’s connections.
See the Sample Sync Adapter app for an example (specifically, see the contacts.xml file).
The new ContactsContract.DataUsageFeedback
APIs allow you to help track
how often the user uses particular methods of contacting people, such as how often the user uses
each phone number or e-mail address. This information helps improve the ranking for each contact
method associated with each person and provide better suggestions for contacting each person.
The new calendar APIs allow you to access and modify the user’s calendars and events using the Calendar Provider. You can read, add, modify and delete calendars, events, attendees, reminders and alerts.
A variety of apps and widgets can use these APIs to read and modify calendar events. However, some of the most compelling use cases are sync adapters that synchronize the user's calendar from other calendar services with the Calendar Provider, in order to offer a unified location for all the user's events. Google Calendar, for example, uses a sync adapter to synchronize Google Calendar events with the Calendar Provider, which can then be viewed with Android's built-in Calendar app.
The data model for calendars and event-related information in the Calendar Provider is
defined by CalendarContract
. All the user’s calendar data is stored in a
number of tables defined by various subclasses of CalendarContract
:
- The CalendarContract.Calendars table holds the calendar-specific information. Each row in this table contains the details for a single calendar, such as the name, color, sync information, and so on.
- The CalendarContract.Events table holds event-specific information. Each row in this table contains the information for a single event, such as the event title, location, start time, end time, and so on. The event can occur one time or recur multiple times. Attendees, reminders, and extended properties are stored in separate tables and use the event's _ID to link them with the event.
- The CalendarContract.Instances table holds the start and end time for occurrences of an event. Each row in this table represents a single occurrence. For one-time events there is a one-to-one mapping of instances to events. For recurring events, multiple rows are automatically generated to correspond to the multiple occurrences of that event.
- The CalendarContract.Attendees table holds the event attendee or guest information. Each row represents a single guest of an event. It specifies the type of guest the person is and the person's response for the event.
- The CalendarContract.Reminders table holds the alert/notification data. Each row represents a single alert for an event. An event can have multiple reminders. The number of reminders per event is specified in MAX_REMINDERS, which is set by the sync adapter that owns the given calendar. Reminders are specified as a number of minutes before the event and an alarm method, such as an alert, email, or SMS message, to remind the user.
- The CalendarContract.ExtendedProperties table holds opaque data fields used by the sync adapter. The provider takes no action with items in this table except to delete them when their related events are deleted.

To access a user's calendar data with the Calendar Provider, your application must request
the READ_CALENDAR
permission (for read access) and
WRITE_CALENDAR
(for write access).
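For instance, a minimal sketch (assuming the READ_CALENDAR permission is already declared) that lists the user's calendars:

String[] projection = {
        CalendarContract.Calendars._ID,
        CalendarContract.Calendars.CALENDAR_DISPLAY_NAME };
Cursor cursor = getContentResolver().query(
        CalendarContract.Calendars.CONTENT_URI, projection, null, null, null);
while (cursor.moveToNext()) {
    long calendarId = cursor.getLong(0);
    String displayName = cursor.getString(1);
    // Use the calendar ID to query the Events table for this calendar
}
cursor.close();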
If all you want to do is add an event to the user's calendar, you can use an ACTION_INSERT intent with a "vnd.android.cursor.item/event" MIME type to start an activity in the Calendar app that creates new events. Using the intent does not require any permission, and you can specify event details with the extras listed below.
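For example, a minimal sketch (the title and times are hypothetical values) that opens the Calendar app's event editor pre-filled:

long startMillis = System.currentTimeMillis() + 60 * 60 * 1000; // one hour from now
long endMillis = startMillis + 30 * 60 * 1000;                  // 30-minute event
Intent intent = new Intent(Intent.ACTION_INSERT)
        .setType("vnd.android.cursor.item/event")
        .putExtra(CalendarContract.Events.TITLE, "Team meeting")
        .putExtra(CalendarContract.EXTRA_EVENT_BEGIN_TIME, startMillis)
        .putExtra(CalendarContract.EXTRA_EVENT_END_TIME, endMillis);
startActivity(intent);

The supported extras include: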
- Events.TITLE: Name for the event
- CalendarContract.EXTRA_EVENT_BEGIN_TIME: Event begin time in milliseconds from the epoch
- CalendarContract.EXTRA_EVENT_END_TIME: Event end time in milliseconds from the epoch
- Events.EVENT_LOCATION: Location of the event
- Events.DESCRIPTION: Event description
- Intent.EXTRA_EMAIL: Email addresses of those to invite
- Events.RRULE: The recurrence rule for the event
- Events.ACCESS_LEVEL: Whether the event is private or public
- Events.AVAILABILITY: Whether the time period of this event allows for other events to be scheduled at the same time

The new voicemail APIs allow applications to add voicemails to a content provider on the device. Because the APIs currently do not allow third-party apps to read all the voicemails from the system, the only third-party apps that should use the voicemail APIs are those that have voicemail to deliver to the user. For instance, it's possible that a user has multiple voicemail sources, such as one provided by the phone's service provider and others from VoIP or other alternative voice services. These apps can use the APIs to add their voicemails to the system for quick playback. The built-in Phone application presents all voicemails from the Voicemail Provider in a single list. Although the system's Phone application is the only application that can read all the voicemails, each application that provides voicemails can read those that it has added to the system (but cannot read voicemails from other services).
The VoicemailContract
class defines the content provider for the
voicemail APIs. The subclasses VoicemailContract.Voicemails
and VoicemailContract.Status
provide tables in which the Voicemail Providers can
insert voicemail data for storage on the device. For an example of a voicemail provider app, see the
Voicemail Provider
Demo.
The Camera
class now includes APIs for detecting faces and controlling
focus and metering areas.
Camera apps can now enhance their abilities with Android’s face detection APIs, which not only detect the face of a subject, but also specific facial features, such as the eyes and mouth.
To detect faces in your camera application, you must register a Camera.FaceDetectionListener
by calling setFaceDetectionListener()
. You can then start
your camera surface and start detecting faces by calling startFaceDetection()
.
When the system detects one or more faces in the camera scene, it calls the onFaceDetection()
callback in your
implementation of Camera.FaceDetectionListener
, including an array of
Camera.Face
objects.
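A minimal sketch of the flow (error handling and preview setup omitted; startFaceDetection() must be called after the preview has started):

Camera camera = Camera.open();
camera.setFaceDetectionListener(new Camera.FaceDetectionListener() {
    @Override
    public void onFaceDetection(Camera.Face[] faces, Camera camera) {
        for (Camera.Face face : faces) {
            Rect bounds = face.rect; // face bounds in the camera's coordinate space
        }
    }
});
// After setting up the preview surface:
camera.startPreview();
camera.startFaceDetection();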
An instance of the Camera.Face
class provides various information about
the face detected, including:
- A Rect that specifies the bounds of the face, relative to the camera's current field of view
- Point objects that indicate where the eyes and mouth are located

Camera apps can now control the areas that the camera uses for focus and for metering white balance and auto-exposure. Both features use the new Camera.Area class to specify the region of the camera's current view that should be focused or metered. An instance of the Camera.Area class defines the bounds of the area with a Rect and the area's weight (an integer representing the level of importance of that area, relative to other areas in consideration).
Before setting either a focus area or metering area, you should first call getMaxNumFocusAreas()
or getMaxNumMeteringAreas()
, respectively. If these return zero, then
the device does not support the corresponding feature.
To specify the focus or metering areas to use, simply call setFocusAreas()
or setMeteringAreas()
. Each takes a List
of Camera.Area
objects that indicate the areas to consider
for focus or metering. For example, you might implement a feature that allows the user to set the
focus area by touching an area of the preview, which you then translate to a Camera.Area
object and request that the camera focus on that area of the scene.
The focus or exposure in that area will continually update as the scene in the area changes.
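A rough sketch of setting a single focus area (area coordinates range from -1000 to 1000, and the weight here is an arbitrary example):

Camera.Parameters params = camera.getParameters();
if (params.getMaxNumFocusAreas() > 0) {
    List<Camera.Area> focusAreas = new ArrayList<Camera.Area>();
    // A 500x500 region centered in the field of view, with weight 600
    focusAreas.add(new Camera.Area(new Rect(-250, -250, 250, 250), 600));
    params.setFocusAreas(focusAreas);
    camera.setParameters(params);
}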
Other new camera capabilities include:

- While recording video, you can now call takePicture() to save a photo without interrupting the video session. Before doing so, you should call isVideoSnapshotSupported() to be sure the hardware supports it.
- You can now lock auto-exposure and white balance with setAutoExposureLock() and setAutoWhiteBalanceLock() to prevent these properties from changing.

Android 4.0 also adds two new camera-related broadcast intents:

- Camera.ACTION_NEW_PICTURE: This indicates that the user has captured a new photo. The built-in Camera app invokes this broadcast after a photo is captured, and third-party camera apps should also broadcast this intent after capturing a photo.
- Camera.ACTION_NEW_VIDEO: This indicates that the user has captured a new video. The built-in Camera app invokes this broadcast after a video is recorded, and third-party camera apps should also broadcast this intent after capturing a video.

Android 4.0 adds several new APIs for applications that interact with media such as photos, videos, and music.
Changes to MediaPlayer include:

- MediaPlayer now requires the INTERNET permission when playing Internet-based content. If you use MediaPlayer to play content from the Internet, be sure to add the INTERNET permission to your manifest, or else your media playback will not work beginning with Android 4.0.
- setSurface() allows you to define a Surface to behave as the video sink.
- setDataSource() now allows you to send additional HTTP headers with your request, which can be useful for HTTP(S) live streaming.

Android 4.0 also adds support for several new media formats and streaming protocols.
For more info, see Supported Media Formats.
The new RemoteControlClient
allows media players to enable playback
controls from remote control clients such as the device lock screen. Media players can also expose
information about the media currently playing for display on the remote control, such as track
information and album art.
To enable remote control clients for your media player, instantiate a RemoteControlClient
with its constructor, passing it a PendingIntent
that broadcasts ACTION_MEDIA_BUTTON
. The intent must also declare the explicit BroadcastReceiver
component in your app that handles the ACTION_MEDIA_BUTTON
event.
To declare which media control inputs your player can handle, you must call setTransportControlFlags()
on your
RemoteControlClient
, passing a set of FLAG_KEY_MEDIA_*
flags, such as
FLAG_KEY_MEDIA_PREVIOUS
and FLAG_KEY_MEDIA_NEXT
.
You must then register your RemoteControlClient
by passing it to AudioManager.registerRemoteControlClient()
.
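Putting the pieces together, a rough sketch (MediaButtonReceiver is a hypothetical BroadcastReceiver declared in your manifest for ACTION_MEDIA_BUTTON):

ComponentName receiver =
        new ComponentName(getPackageName(), MediaButtonReceiver.class.getName());
Intent mediaButtonIntent = new Intent(Intent.ACTION_MEDIA_BUTTON);
mediaButtonIntent.setComponent(receiver);
PendingIntent pending = PendingIntent.getBroadcast(this, 0, mediaButtonIntent, 0);

RemoteControlClient client = new RemoteControlClient(pending);
client.setTransportControlFlags(RemoteControlClient.FLAG_KEY_MEDIA_PLAY_PAUSE
        | RemoteControlClient.FLAG_KEY_MEDIA_NEXT);

AudioManager audioManager = (AudioManager) getSystemService(Context.AUDIO_SERVICE);
audioManager.registerMediaButtonEventReceiver(receiver);
audioManager.registerRemoteControlClient(client);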
Once registered, the broadcast receiver you declared when you instantiated the RemoteControlClient
will receive ACTION_MEDIA_BUTTON
events when a button is pressed from a remote control. The intent you receive includes the KeyEvent
for the media key pressed, which you can retrieve from the intent with getParcelableExtra(Intent.EXTRA_KEY_EVENT)
.
To display information on the remote control about the media playing, call editMetadata()
and add metadata to the returned
RemoteControlClient.MetadataEditor
. You can supply a bitmap for media artwork,
numerical information such as elapsed time, and text information such as the track title. For
information on available keys see the METADATA_KEY_*
flags in MediaMetadataRetriever
.
For a sample implementation, see the Random Music Player, which provides compatibility logic such that it enables the remote control client on Android 4.0 devices while continuing to support devices back to Android 2.1.
A new media effects framework allows you to apply a variety of visual effects to images and videos. The system performs all effects processing on the GPU to obtain maximum performance. New applications for Android 4.0 such as Google Talk and the Gallery editor make use of the effects API to apply real-time effects to video and photos.
For maximum performance, effects are applied directly to OpenGL textures, so your application must have a valid OpenGL context before it can use the effects APIs. The textures to which you apply effects may be from bitmaps, videos or even the camera. However, there are certain restrictions that textures must meet:
- They must be bound to a GL_TEXTURE_2D texture image

An Effect object defines a single media effect that you can apply to an image frame. The basic workflow to create an Effect is:

1. Call EffectContext.createWithCurrentGlContext() from your OpenGL ES 2.0 context.
2. Use the returned EffectContext to call EffectContext.getFactory(), which returns an instance of EffectFactory.
3. Call createEffect(), passing it an effect name from EffectFactory, such as EFFECT_FISHEYE
or EFFECT_VIGNETTE
.Not all devices support all effects, so you must first check if the desired effect is supported
by calling isEffectSupported()
.
You can adjust an effect’s parameters by calling setParameter()
and passing a parameter name and parameter value. Each type of effect accepts
different parameters, which are documented with the effect name. For example, EFFECT_FISHEYE
has one parameter for the scale
of the
distortion.
To apply an effect on a texture, call apply()
on the
Effect
and pass in the input texture, its width and height, and the output
texture. The input texture must be bound to a GL_TEXTURE_2D
texture
image (usually done by calling the glTexImage2D()
function). You may provide multiple mipmap levels. If the output texture has not been bound to a
texture image, it will be automatically bound by the effect as a GL_TEXTURE_2D
and with one mipmap level (0), which will have the same
size as the input.
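A condensed sketch of the whole flow (inputTexture and outputTexture are hypothetical GL texture names created elsewhere, with a current OpenGL ES 2.0 context):

EffectContext effectContext = EffectContext.createWithCurrentGlContext();
EffectFactory factory = effectContext.getFactory();
if (EffectFactory.isEffectSupported(EffectFactory.EFFECT_FISHEYE)) {
    Effect fisheye = factory.createEffect(EffectFactory.EFFECT_FISHEYE);
    fisheye.setParameter("scale", 0.5f); // distortion scale parameter
    fisheye.apply(inputTexture, width, height, outputTexture);
}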
Android now supports Bluetooth Health Profile devices, so you can create applications that use Bluetooth to communicate with health devices that support Bluetooth, such as heart-rate monitors, blood meters, thermometers, and scales.
Similar to regular headset and A2DP profile devices, you must call getProfileProxy()
with a BluetoothProfile.ServiceListener
and the HEALTH
profile type to establish a connection with the profile
proxy object.
Once you’ve acquired the Health Profile proxy (the BluetoothHealth
object), connecting to and communicating with paired health devices involves the following new
Bluetooth classes:
- BluetoothHealthCallback: You must extend this class and implement the callback methods to receive updates about changes in the application's registration state and Bluetooth channel state.
- BluetoothHealthAppConfiguration: During callbacks to your BluetoothHealthCallback, you'll receive an instance of this object, which provides configuration information about the available Bluetooth health device, which you must use to perform various operations such as initiating and terminating connections with the BluetoothHealth APIs.

For more information about using the Bluetooth Health Profile, see the documentation for BluetoothHealth.
Android Beam is a new NFC feature that allows you to send NDEF messages from one device to another (a process also known as “NDEF Push”). The data transfer is initiated when two Android-powered devices that support Android Beam are in close proximity (about 4 cm), usually with their backs touching. The data inside the NDEF message can contain any data that you wish to share between devices. For example, the People app shares contacts, YouTube shares videos, and Browser shares URLs using Android Beam.
To transmit data between devices using Android Beam, you need to create an NdefMessage
that contains the information you want to share while your activity is in
the foreground. You must then pass the NdefMessage
to the system in one of two
ways:
- Set an NdefMessage to push while in the activity: Call setNdefPushMessage() at any time to set the message you want to send. For instance, you might call this method and pass it your NdefMessage during your activity's onCreate() method. Then, whenever Android Beam is activated with another device while the activity is in the foreground, the system sends the NdefMessage to the other device.
- Create an NdefMessage to push at the time that Android Beam is initiated: Implement NfcAdapter.CreateNdefMessageCallback, in which your implementation of the createNdefMessage() method returns the NdefMessage you want to send. Then pass the NfcAdapter.CreateNdefMessageCallback implementation to setNdefPushMessageCallback(). In this case, when Android Beam is activated with another device while your activity is in the foreground, the system calls createNdefMessage() to retrieve the NdefMessage you want to send. This allows you to define the NdefMessage to deliver only once Android Beam is initiated, in case the contents of the message might vary throughout the life of the activity.
In case you want to run some specific code once the system has successfully delivered your NDEF
message to the other device, you can implement NfcAdapter.OnNdefPushCompleteCallback
and set it with setNdefPushCompleteCallback()
. The system will
then call onNdefPushComplete()
when the message is delivered.
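As a minimal sketch of the first approach (the URI payload is a hypothetical example):

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    NfcAdapter nfcAdapter = NfcAdapter.getDefaultAdapter(this);
    if (nfcAdapter != null) {
        NdefMessage message = new NdefMessage(new NdefRecord[] {
                NdefRecord.createUri("http://www.example.com") });
        // Sent whenever Android Beam is activated while this activity is foreground
        nfcAdapter.setNdefPushMessage(message, this);
    }
}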
On the receiving device, the system dispatches NDEF Push messages in a similar way to regular NFC
tags. The system invokes an intent with the ACTION_NDEF_DISCOVERED
action to start an activity, with either a URL or a MIME type set according to the first NdefRecord
in the NdefMessage
. For the activity that should respond, you can declare intent filters for the URLs or MIME types your app cares about. For more
information about Tag Dispatch see the NFC developer guide.
If you want your NdefMessage
to carry a URI, you can now use the convenience
method createUri
to construct a new NdefRecord
based on either a string or a Uri
object. If the URI is
a special format that you want your application to also receive during an Android Beam event, you
should create an intent filter for your activity using the same URI scheme in order to receive the
incoming NDEF message.
You should also pass an “Android application record” with your NdefMessage
in
order to guarantee that your application handles the incoming NDEF message, even if other
applications filter for the same intent action. You can create an Android application record by
calling createApplicationRecord()
, passing it
your application’s package name. When the other device receives the NDEF message with the
application record and multiple applications contain activities that handle the specified intent,
the system always delivers the message to the activity in your application (based on the matching
application record). If the target device does not currently have your application installed, the
system uses the Android application record to launch Android Market and take the user to the
application in order to install it.
If your application doesn’t use NFC APIs to perform NDEF Push messaging, then Android provides a default behavior: When your application is in the foreground on one device and Android Beam is invoked with another Android-powered device, then the other device receives an NDEF message with an Android application record that identifies your application. If the receiving device has the application installed, the system launches it; if it’s not installed, Android Market opens and takes the user to your application in order to install it.
For some example code, see the Android Beam Demo sample app.
Android now supports Wi-Fi Direct for peer-to-peer (P2P) connections between Android-powered devices and other device types without a hotspot or Internet connection. The Android framework provides a set of Wi-Fi P2P APIs that allow you to discover and connect to other devices when each device supports Wi-Fi Direct, then communicate over a speedy connection across distances much longer than a Bluetooth connection.
A new package, android.net.wifi.p2p
, contains all the APIs for performing peer-to-peer
connections with Wi-Fi. The primary class you need to work with is WifiP2pManager
, which you can acquire by calling getSystemService(WIFI_P2P_SERVICE)
. The WifiP2pManager
includes APIs that allow you to:

- Initialize your application for P2P connections by calling initialize()
- Discover nearby devices by calling discoverPeers()
- Start a P2P connection by calling connect()
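For example, a minimal sketch that initializes the framework and starts peer discovery (results arrive via the WIFI_P2P_PEERS_CHANGED_ACTION broadcast):

WifiP2pManager manager =
        (WifiP2pManager) getSystemService(Context.WIFI_P2P_SERVICE);
WifiP2pManager.Channel channel = manager.initialize(this, getMainLooper(), null);
manager.discoverPeers(channel, new WifiP2pManager.ActionListener() {
    @Override
    public void onSuccess() {
        // Discovery started; peers are delivered via broadcast
    }
    @Override
    public void onFailure(int reason) {
        // Discovery could not be started
    }
});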
Several other interfaces and classes are necessary as well, such as:
- The WifiP2pManager.ActionListener interface allows you to receive callbacks when an operation such as discovering peers or connecting to them succeeds or fails.
- The WifiP2pManager.PeerListListener interface allows you to receive information about discovered peers. The callback provides a WifiP2pDeviceList, from which you can retrieve a WifiP2pDevice object for each device within range and get information such as the device name, address, device type, the WPS configurations the device supports, and more.
- The WifiP2pManager.GroupInfoListener interface allows you to receive information about a P2P group. The callback provides a WifiP2pGroup object, which provides group information such as the owner, the network name, and passphrase.
- The WifiP2pManager.ConnectionInfoListener interface allows you to receive information about the current connection. The callback provides a WifiP2pInfo object, which has information such as whether a group has been formed and who is the group owner.

In order to use the Wi-Fi P2P APIs, your app must request the following user permissions:

- ACCESS_WIFI_STATE
- CHANGE_WIFI_STATE
- INTERNET (although your app doesn't technically connect to the Internet, the Wi-Fi Direct implementation uses sockets that do require Internet permission to work)

The Android system also broadcasts several different actions during certain Wi-Fi P2P events:
- WIFI_P2P_CONNECTION_CHANGED_ACTION: The P2P connection state has changed. This carries EXTRA_WIFI_P2P_INFO with a WifiP2pInfo object and EXTRA_NETWORK_INFO with a NetworkInfo object.
- WIFI_P2P_STATE_CHANGED_ACTION: The P2P state has changed between enabled and disabled. It carries EXTRA_WIFI_STATE with either WIFI_P2P_STATE_DISABLED or WIFI_P2P_STATE_ENABLED.
- WIFI_P2P_PEERS_CHANGED_ACTION: The list of peer devices has changed.
- WIFI_P2P_THIS_DEVICE_CHANGED_ACTION: The details for this device have changed.

See the WifiP2pManager documentation for more information. Also look at the Wi-Fi Direct Demo sample application.
Android 4.0 gives users precise visibility into how much network data their applications are using. The Settings app provides controls that allow users to set limits for network data usage and even disable the use of background data for individual apps. To avoid users disabling your app's access to data from the background, you should develop strategies to use the data connection efficiently and adjust your usage depending on the type of connection available.
If your application performs a lot of network transactions, you should provide user settings that
allow users to control your app’s data habits, such as how often your app syncs data, whether to
perform uploads/downloads only when on Wi-Fi, whether to use data while roaming, etc. With these
controls available to them, users are much less likely to disable your app’s access to data when
they approach their limits, because they can instead precisely control how much data your app uses.
If you provide a preference activity with these settings, you should include in its manifest
declaration an intent filter for the ACTION_MANAGE_NETWORK_USAGE
action. For example:
<activity android:name="DataPreferences" android:label="@string/title_preferences">
    <intent-filter>
        <action android:name="android.intent.action.MANAGE_NETWORK_USAGE" />
        <category android:name="android.intent.category.DEFAULT" />
    </intent-filter>
</activity>
This intent filter indicates to the system that this is the activity that controls your application’s data usage. Thus, when the user inspects how much data your app is using from the Settings app, a “View application settings” button is available that launches your preference activity so the user can refine how much data your app uses.
Also beware that getBackgroundDataSetting()
is now
deprecated and always returns true—use getActiveNetworkInfo()
instead. Before you attempt any network
transactions, you should always call getActiveNetworkInfo()
to get the NetworkInfo
that represents the current network and query isConnected()
to check whether the device has a
connection. You can then check other connection properties, such as whether the device is
roaming or connected to Wi-Fi.
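For example, a short sketch of that check:

ConnectivityManager cm =
        (ConnectivityManager) getSystemService(Context.CONNECTIVITY_SERVICE);
NetworkInfo networkInfo = cm.getActiveNetworkInfo();
if (networkInfo != null && networkInfo.isConnected()) {
    boolean onWifi = networkInfo.getType() == ConnectivityManager.TYPE_WIFI;
    boolean roaming = networkInfo.isRoaming();
    // Decide whether to perform the transfer now or defer it
}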
Three major features have been added to RenderScript:
The Allocation
class now supports a USAGE_GRAPHICS_RENDER_TARGET
memory space, which allows you to
render things directly into the Allocation
and use it as a framebuffer
object.
RSTextureView
provides a means to display RenderScript graphics
inside of a View
, unlike RSSurfaceView
, which
creates a separate window. This key difference allows you to do things such as move, transform, or
animate an RSTextureView
as well as draw RenderScript graphics inside
a view that lies within an activity layout.
The Script.forEach()
method allows you to call
RenderScript compute scripts from the VM level and have them automatically delegated to available
cores on the device. You do not use this method directly, but any compute RenderScript that you
write will have a forEach()
method that you can call in
the reflected RenderScript class. You can call the reflected forEach()
method by passing in an input Allocation
to process, an output Allocation
to
write the result to, and a FieldPacker
data structure in case the
RenderScript needs more information. Only one of the Allocations is necessary, and the data structure is optional.
Android 4.0 improves accessibility for sight-impaired users with new explore-by-touch mode and extended APIs that allow you to provide more information about view content or develop advanced accessibility services.
Users with vision loss can now explore the screen by touching and dragging a finger across the
screen to hear voice descriptions of the content. Because the explore-by-touch mode works like a
virtual cursor, it allows screen readers to identify the descriptive text the same way that screen
readers can when the user navigates with a d-pad or trackball—by reading information provided
by android:contentDescription
and setContentDescription()
upon a simulated "hover" event. So consider this a reminder that you should provide descriptive text for the views in your
application, especially for ImageButton
, EditText
,
ImageView
and other widgets that might not naturally contain descriptive
text.
To enhance the information available to accessibility services such as screen readers, you can
implement new callback methods for accessibility events in your custom View
components.
It's important to first note that the behavior of the sendAccessibilityEvent()
method has changed in Android
4.0. As with previous versions of Android, when the user enables accessibility services on the device
and an input event such as a click or hover occurs, the respective view is notified with a call to
sendAccessibilityEvent()
. Previously, the
implementation of sendAccessibilityEvent()
would
initialize an AccessibilityEvent
and send it to AccessibilityManager
. The new behavior involves some additional callback
methods that allow the view and its parents to add more contextual information to the event:
1. The sendAccessibilityEvent() and sendAccessibilityEventUnchecked() methods defer to onInitializeAccessibilityEvent(). Custom implementations of View might want to implement onInitializeAccessibilityEvent() to attach additional accessibility information to the AccessibilityEvent, but should also call the super implementation to provide default information such as the standard content description, item index, and more. However, you should not add additional text content in this callback; that happens next.
2. The event is then passed to dispatchPopulateAccessibilityEvent(), which defers to the onPopulateAccessibilityEvent() callback. Custom implementations of View should usually implement onPopulateAccessibilityEvent() to add additional text content to the AccessibilityEvent if the android:contentDescription text is missing or insufficient. To add more text description to the AccessibilityEvent, call getText().add().
3. Once the event is populated, the View passes the event up the view hierarchy by calling requestSendAccessibilityEvent() on the parent view. Each parent view then has the chance to augment the accessibility information by adding an AccessibilityRecord, until it ultimately reaches the root view, which sends the event to the AccessibilityManager with sendAccessibilityEvent().

In addition to the new methods above, which are useful when extending the View
class, you can also intercept these event callbacks on any View
by extending AccessibilityDelegate
and setting it on the view with
setAccessibilityDelegate()
.
When you do, each accessibility method in the view defers the call to the corresponding method in
the delegate. For example, when the view receives a call to onPopulateAccessibilityEvent()
, it passes it to the
same method in the View.AccessibilityDelegate
. Any methods not handled by
the delegate are given right back to the view for default behavior. This allows you to override only
the methods necessary for any given view without extending the View
class.
If you want to maintain compatibility with Android versions prior to 4.0 while also supporting the new accessibility APIs, you can do so with the latest version of the v4 support library (in Compatibility Package, r4) using a set of utility classes that provide the new accessibility APIs in a backward-compatible design.
If you're developing an accessibility service, the information about various accessibility events has been significantly expanded to enable more advanced accessibility feedback for users. In particular, events are generated based on view composition, providing better context information and allowing accessibility services to traverse view hierarchies to get additional view information and deal with special cases.
If you're developing an accessibility service (such as a screen reader), you can access additional content information and traverse view hierarchies with the following procedure:
1. When you receive an AccessibilityEvent from an application, call AccessibilityEvent.getRecord() to retrieve a specific AccessibilityRecord (there may be several records attached to the event).
2. From either the AccessibilityEvent or an individual AccessibilityRecord, you can call getSource() to retrieve an AccessibilityNodeInfo object. An AccessibilityNodeInfo represents a single node of the window content in a format that allows you to query accessibility information about that node. The AccessibilityNodeInfo object returned from AccessibilityEvent describes the event source, whereas the source from an AccessibilityRecord describes the predecessor of the event source.
3. With the AccessibilityNodeInfo, you can query information about it, call getParent() or getChild() to traverse the view hierarchy, and even add child views to the node.

In order for your application to publish itself to the system as an accessibility service, it must declare an XML configuration file that corresponds to AccessibilityServiceInfo. For more information about creating an accessibility service, see AccessibilityService and SERVICE_META_DATA for information about the XML configuration.
If you're interested in the device's accessibility state, the AccessibilityManager
has some new APIs such as:
- AccessibilityManager.AccessibilityStateChangeListener is an interface that allows you to receive a callback whenever accessibility is enabled or disabled.
- getEnabledAccessibilityServiceList() provides information about which accessibility services are currently enabled.
- isTouchExplorationEnabled() tells you whether the explore-by-touch mode is enabled.

Android 4.0 expands the capabilities for enterprise applications with the following features.
The new VpnService
allows applications to build their own VPN (Virtual
Private Network), running as a Service
. A VPN service creates an interface for a
virtual network with its own address and routing rules and performs all reading and writing with a
file descriptor.
To create a VPN service, use VpnService.Builder
, which allows you to specify
the network address, DNS server, network route, and more. When complete, you can establish the
interface by calling establish()
, which returns a ParcelFileDescriptor
.
Because a VPN service can intercept packets, there are security implications. As such, if you
implement VpnService
, then your service must require the BIND_VPN_SERVICE permission to ensure that only the system can bind to it (only
the system is granted this permission—apps cannot request it). To then use your VPN service,
users must manually enable it in the system settings.
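A bare-bones sketch of establishing the interface from within a VpnService subclass (all addresses here are hypothetical examples):

public class ExampleVpnService extends VpnService {
    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        ParcelFileDescriptor tunnel = new Builder()
                .setSession("ExampleVPN")
                .addAddress("10.0.0.2", 24)   // virtual interface address
                .addDnsServer("10.0.0.1")     // hypothetical DNS server
                .addRoute("0.0.0.0", 0)       // route all traffic through the VPN
                .establish();
        // Read outgoing packets from tunnel.getFileDescriptor() and write
        // incoming packets back to it on a background thread.
        return START_STICKY;
    }
}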
Applications that manage the device restrictions can now disable the camera using setCameraDisabled()
and the USES_POLICY_DISABLE_CAMERA
property (applied with a <disable-camera />
element in the policy configuration file).
The new KeyChain
class provides APIs that allow you to import and access
certificates in the system key store. Certificates streamline the installation of both client
certificates (to validate the identity of the user) and certificate authority certificates (to
verify server identity). Applications such as web browsers or email clients can access the installed
certificates to authenticate users to servers. See the KeyChain
documentation for more information.
Two new sensor types have been added in Android 4.0:
- TYPE_AMBIENT_TEMPERATURE: A temperature sensor that provides the ambient (room) temperature in degrees Celsius.
- TYPE_RELATIVE_HUMIDITY: A humidity sensor that provides the relative ambient (room) humidity as a percentage.

If a device has both TYPE_AMBIENT_TEMPERATURE and TYPE_RELATIVE_HUMIDITY sensors, you can use them to calculate the dew point and the absolute humidity.
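A short sketch of reading both sensors (a real app would also unregister the listener when it no longer needs updates):

SensorManager sensorManager =
        (SensorManager) getSystemService(Context.SENSOR_SERVICE);
Sensor temperature =
        sensorManager.getDefaultSensor(Sensor.TYPE_AMBIENT_TEMPERATURE);
Sensor humidity = sensorManager.getDefaultSensor(Sensor.TYPE_RELATIVE_HUMIDITY);
SensorEventListener listener = new SensorEventListener() {
    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_AMBIENT_TEMPERATURE) {
            float celsius = event.values[0];          // degrees Celsius
        } else {
            float relativeHumidity = event.values[0]; // percentage
        }
    }
    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
};
sensorManager.registerListener(listener, temperature,
        SensorManager.SENSOR_DELAY_NORMAL);
sensorManager.registerListener(listener, humidity,
        SensorManager.SENSOR_DELAY_NORMAL);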
The previous temperature sensor, TYPE_TEMPERATURE
, has been
deprecated. You should use the TYPE_AMBIENT_TEMPERATURE
sensor
instead.
Additionally, Android’s three synthetic sensors have been improved so they now have lower latency
and smoother output. These sensors include the gravity sensor (TYPE_GRAVITY
), rotation vector sensor (TYPE_ROTATION_VECTOR
), and linear acceleration sensor (TYPE_LINEAR_ACCELERATION
). The improved sensors rely on the gyroscope
sensor to improve their output, so the sensors appear only on devices that have a gyroscope.
Android’s text-to-speech (TTS) APIs have been significantly extended to allow applications to more easily implement custom TTS engines, while applications that want to use a TTS engine have a couple new APIs for selecting an engine.
In previous versions of Android, you could use the TextToSpeech
class
to perform text-to-speech (TTS) operations using the TTS engine provided by the system or set a
custom engine using setEngineByPackageName()
. In Android 4.0, the setEngineByPackageName()
method has been
deprecated and you can now specify the engine to use with a new TextToSpeech
constructor that accepts the package name of a TTS engine.
You can also query the available TTS engines with getEngines()
. This method returns a list of TextToSpeech.EngineInfo
objects, which include meta data such as the engine’s
icon, label, and package name.
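For example, a small sketch that lists the installed engines and then creates a TextToSpeech bound to one of them (context, initListener, and the package name are hypothetical placeholders):

TextToSpeech tts = new TextToSpeech(context, initListener);
for (TextToSpeech.EngineInfo engine : tts.getEngines()) {
    // engine.label is the human-readable name; engine.name is the package
    Log.d("TTS", engine.label + " (" + engine.name + ")");
}
// Create a new instance bound to a specific engine by package name
TextToSpeech bound =
        new TextToSpeech(context, initListener, "com.example.tts.engine");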
Previously, custom engines required that the engine be built using an undocumented native header file. In Android 4.0, there is a complete set of framework APIs for building TTS engines.
The basic setup requires an implementation of TextToSpeechService
that
responds to the INTENT_ACTION_TTS_SERVICE
intent. The
primary work for a TTS engine happens during the onSynthesizeText()
callback in a service
that extends TextToSpeechService
. The system delivers two objects to this method:

- SynthesisRequest: This contains various data including the text to synthesize, the locale, the speech rate, and the voice pitch.
- SynthesisCallback: This is the interface by which your TTS engine delivers the resulting speech data as streaming audio. First the engine must call start() to indicate that the engine is ready to deliver the audio, then call audioAvailable(), passing it the audio data in a byte buffer. Once your engine has passed all audio through the buffer, call done().

Now that the framework supports a true API for creating TTS engines, support for the native code implementation has been removed. Look for a blog post about a compatibility layer that you can use to convert your old TTS engines to the new framework.
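A skeletal sketch of the callback (synthesize() is a hypothetical function producing 16-bit PCM audio for the requested text):

@Override
protected void onSynthesizeText(SynthesisRequest request,
        SynthesisCallback callback) {
    // Declare the stream format: 16 kHz, 16-bit PCM, mono
    callback.start(16000, AudioFormat.ENCODING_PCM_16BIT, 1);
    byte[] audio = synthesize(request.getText()); // hypothetical synthesizer
    callback.audioAvailable(audio, 0, audio.length);
    callback.done();
}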
For an example TTS engine using the new APIs, see the Text To Speech Engine sample app.
A new spell checker framework allows apps to create spell checkers in a manner similar to the
input method framework. To create a new spell checker, you must implement a service that extends
SpellCheckerService
and extend the SpellCheckerService.Session
class to provide spelling suggestions based
on text provided by interface callback methods. In the SpellCheckerService.Session
callback methods, you must return the
spelling suggestions as SuggestionsInfo
objects.
Applications with a spell checker service must declare the BIND_TEXT_SERVICE permission as required by the service, so that other services must hold this permission in order to bind with the spell checker service.
The service must also declare an intent filter with <action
android:name="android.service.textservice.SpellCheckerService" />
as the intent’s action and should
include a <meta-data>
element that declares configuration information for the spell
checker.
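A skeletal sketch of such a service (the suggestion logic here is a placeholder that simply marks every word as found in the dictionary):

public class ExampleSpellCheckerService extends SpellCheckerService {
    @Override
    public Session createSession() {
        return new ExampleSession();
    }

    private static class ExampleSession extends Session {
        @Override
        public void onCreate() { }

        @Override
        public SuggestionsInfo onGetSuggestions(TextInfo textInfo,
                int suggestionsLimit) {
            // Placeholder: report every word as correct with no suggestions
            return new SuggestionsInfo(
                    SuggestionsInfo.RESULT_ATTR_IN_THE_DICTIONARY,
                    new String[0]);
        }
    }
}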
The ActionBar
has been updated to support several new behaviors. Most
importantly, the system gracefully manages the action bar’s size and configuration when running on
smaller screens in order to provide an optimal user experience on all screen sizes. For example,
when the screen is narrow (such as when a handset is in portrait orientation), the action bar’s
navigation tabs appear in a “stacked bar,” which appears directly below the main action bar. You can
also opt-in to a “split action bar,” which places all action items in a separate bar at the bottom
of the screen when the screen is narrow.
If your action bar includes several action items, not all of them will fit into the action bar on
a narrow screen, so the system will place more of them into the overflow menu. However, Android 4.0
allows you to enable “split action bar” so that more action items can appear on the screen in a
separate bar at the bottom of the screen. To enable split action bar, add android:uiOptions
with ”splitActionBarWhenNarrow”
to either your
<application>
tag or
individual <activity>
tags
in your manifest file. When enabled, the system will add an additional bar at the bottom of the
screen for all action items when the screen is narrow (no action items will appear in the primary
action bar).
If you want to use the navigation tabs provided by the ActionBar.Tab
APIs,
but don’t need the main action bar on top (you want only the tabs to appear at the top), then enable
the split action bar as described above and also call setDisplayShowHomeEnabled(false)
to disable the
application icon in the action bar. With nothing left in the main action bar, it
disappears—all that’s left are the navigation tabs at the top and the action items at the
bottom of the screen.
If you want to apply custom styling to the action bar, you can use new style properties backgroundStacked
and backgroundSplit
to apply a background
drawable or color to the stacked bar and split bar, respectively. You can also set these styles at
runtime with setStackedBackgroundDrawable()
and setSplitBackgroundDrawable()
.
The new ActionProvider
class allows you to create a specialized handler for
action items. An action provider can define an action view, a default action behavior, and a submenu
for each action item to which it is associated. When you want to create an action item that has
dynamic behaviors (such as a variable action view, default action, or submenu), extending ActionProvider
is a good solution in order to create a reusable component, rather than
handling the various action item transformations in your fragment or activity.
For example, the ShareActionProvider
is an extension of ActionProvider
that facilitates a “share” action from the action bar. Instead of using
a traditional action item that invokes the ACTION_SEND
intent, you can
use this action provider to present an action view with a drop-down list of applications that handle
the ACTION_SEND
intent. When the user selects an application to use
for the action, ShareActionProvider
remembers that selection and provides it
in the action view for faster access to sharing with that app.
To declare an action provider for an action item, include the android:actionProviderClass
attribute in the <item>
element for your activity’s options menu, with the class name of the action
provider as the value. For example:
<item android:id="@+id/menu_share"
      android:title="Share"
      android:showAsAction="ifRoom"
      android:actionProviderClass="android.widget.ShareActionProvider" />
In your activity’s onCreateOptionsMenu()
callback method, retrieve an instance of the action provider from the menu item and set the
intent:
public boolean onCreateOptionsMenu(Menu menu) {
    getMenuInflater().inflate(R.menu.options, menu);
    ShareActionProvider shareActionProvider = (ShareActionProvider)
            menu.findItem(R.id.menu_share).getActionProvider();
    // Set the share intent of the share action provider.
    shareActionProvider.setShareIntent(createShareIntent());
    ...
    return super.onCreateOptionsMenu(menu);
}
For an example using the ShareActionProvider
, see the ActionBarActionProviderActivity
class in ApiDemos.
Action items that provide an action view can now toggle between their action view state and
traditional action item state. Previously only the SearchView
supported
collapsing when used as an action view, but now you can add an action view for any action item and
switch between the expanded state (action view is visible) and collapsed state (action item is
visible).
To declare that an action item's action view is collapsible, include the “collapseActionView”
flag in the android:showAsAction
attribute for the <item>
element in the menu’s XML file.
To receive callbacks when an action view switches between expanded and collapsed, register an
instance of MenuItem.OnActionExpandListener
with the respective MenuItem
by calling setOnActionExpandListener()
. Typically, you should do so during the onCreateOptionsMenu()
callback.
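For example, a small sketch (R.id.menu_search is a hypothetical menu item with a collapsible action view):

@Override
public boolean onCreateOptionsMenu(Menu menu) {
    getMenuInflater().inflate(R.menu.options, menu);
    MenuItem searchItem = menu.findItem(R.id.menu_search);
    searchItem.setOnActionExpandListener(new MenuItem.OnActionExpandListener() {
        @Override
        public boolean onMenuItemActionExpand(MenuItem item) {
            return true; // allow the action view to expand
        }
        @Override
        public boolean onMenuItemActionCollapse(MenuItem item) {
            return true; // allow the action view to collapse
        }
    });
    return super.onCreateOptionsMenu(menu);
}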
To control a collapsible action view, you can call collapseActionView()
and expandActionView()
on
the respective MenuItem
.
When creating a custom action view, you can also implement the new CollapsibleActionView
interface to receive callbacks when the view is expanded and
collapsed.
Other new action bar and menu APIs include:

- setHomeButtonEnabled() allows you to specify whether the icon/logo behaves as a button to navigate home or “up” (pass “true” to make it behave as a button).
- setIcon() and setLogo() allow you to define the action bar icon or logo at runtime.
- Fragment.setMenuVisibility() allows you to enable or disable the visibility of the options menu items declared by the fragment. This is useful if the fragment has been added to the activity, but is not visible, so the menu items should be hidden.
- FragmentManager.invalidateOptionsMenu() allows you to invalidate the activity options menu during various states of the fragment lifecycle in which using the equivalent method from Activity might not be available.

Android 4.0 introduces a variety of new views and other UI components.
Since the early days of Android, the system has managed a UI component known as the status bar, which resides at the top of handset devices to deliver information such as the carrier signal, time, notifications, and so on. Android 3.0 added the system bar for tablet devices, which resides at the bottom of the screen to provide system navigation controls (Home, Back, and so forth) and also an interface for elements traditionally provided by the status bar. In Android 4.0, the system provides a new type of system UI called the navigation bar. The navigation bar shares some qualities with the system bar, because it provides navigation controls for devices that don’t have hardware counterparts for navigating the system, but the navigation controls are all that the navigation bar offers (a device with the navigation bar, thus, also includes the status bar at the top of the screen).
To this day, you can hide the status bar on handsets using the FLAG_FULLSCREEN
flag. In Android 4.0, the APIs that control
the system bar’s visibility have been updated to better reflect the behavior of both the system bar
and navigation bar:
- The SYSTEM_UI_FLAG_LOW_PROFILE flag replaces the View.STATUS_BAR_HIDDEN flag. When set, this flag enables “low profile” mode for the system bar or navigation bar. Navigation buttons dim and other elements in the system bar also hide.
- The SYSTEM_UI_FLAG_VISIBLE flag replaces the STATUS_BAR_VISIBLE flag to request that the system bar or navigation bar be visible.
- SYSTEM_UI_FLAG_HIDE_NAVIGATION is a new flag that requests that the navigation bar hide completely. Take note that this works only for the navigation bar used by some handsets (it does not hide the system bar on tablets). The navigation bar returns as soon as the system receives user input. As such, this mode is generally used for video playback or other cases in which the whole screen is needed but user input is not required.

You can set each of these flags for the system bar and navigation bar by calling setSystemUiVisibility()
on any view in your activity. The
window manager will combine (OR-together) all flags from all views in your window and
apply them to the system UI as long as your window has input focus. When your window loses input
focus (the user navigates away from your app, or a dialog appears), your flags cease to have effect.
Similarly, if you remove those views from the view hierarchy their flags no longer apply.
To synchronize other events in your activity with visibility changes to the system UI (for
example, hide the action bar or other UI controls when the system UI hides), you should register a
View.OnSystemUiVisibilityChangeListener
to be notified when the visibility
of the system bar or navigation bar changes.
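For example, a small sketch that dims the bars and reacts when they return to normal:

View decorView = getWindow().getDecorView();
decorView.setSystemUiVisibility(View.SYSTEM_UI_FLAG_LOW_PROFILE);
decorView.setOnSystemUiVisibilityChangeListener(
        new View.OnSystemUiVisibilityChangeListener() {
            @Override
            public void onSystemUiVisibilityChange(int visibility) {
                if ((visibility & View.SYSTEM_UI_FLAG_LOW_PROFILE) == 0) {
                    // The bars are back to normal; restore any hidden UI
                }
            }
        });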
See the OverscanActivity class for a demonstration of different system UI options.
GridLayout
is a new view group that places child views in a rectangular
grid. Unlike TableLayout
, GridLayout
relies on a flat
hierarchy and does not make use of intermediate views such as table rows for providing structure.
Instead, children specify which row(s) and column(s) they should occupy (cells can span multiple
rows and/or columns), and by default are laid out sequentially across the grid’s rows and columns.
The GridLayout
orientation determines whether sequential children are by
default laid out horizontally or vertically. Space between children may be specified either by using
instances of the new Space
view or by setting the relevant margin parameters
on children.
See ApiDemos
for samples using GridLayout
.
TextureView
is a new view that allows you to display a content stream, such
as a video or an OpenGL scene. Although similar to SurfaceView
, TextureView
is unique in that it behaves like a regular view, rather than creating a
separate window, so you can treat it like any other View
object. For example,
you can apply transforms, animate it using ViewPropertyAnimator
, or
adjust its opacity with setAlpha()
.
Beware that TextureView
works only within a hardware accelerated window.
For more information, see the TextureView
documentation.
The new Switch
widget is a two-state toggle that users can drag to one
side or the other (or simply tap) to toggle an option between two states.
You can use the android:textOn
and android:textOff
attributes to specify the text
to appear on the switch when in the on and off setting. The android:text
attribute also
allows you to place a label alongside the switch.
For a sample using switches, see the switches.xml layout file and respective Switches activity.
Android 3.0 introduced PopupMenu
to create short contextual menus that pop
up at an anchor point you specify (usually at the point of the item selected). Android 4.0 extends
the PopupMenu
with a couple useful features:
- You can now inflate the menu contents from an XML resource by calling inflate(), passing it the menu resource ID.
- You can now register a PopupMenu.OnDismissListener that receives a callback when the menu is dismissed.

A new TwoStatePreference
abstract class serves as the basis for
preferences that provide a two-state selection option. The new SwitchPreference
is an extension of TwoStatePreference
that provides a Switch
widget in the
preference view to allow users to toggle a setting on or off without the need to open an additional
preference screen or dialog. For example, the Settings application uses a SwitchPreference
for the Wi-Fi and Bluetooth settings.
The View
class now supports “hover” events to enable richer interactions
through the use of pointer devices (such as a mouse or other devices that drive an on-screen
cursor).
To receive hover events on a view, implement the View.OnHoverListener
and
register it with setOnHoverListener()
. When a hover
event occurs on the view, your listener receives a call to onHover()
, providing the View
that
received the event and a MotionEvent
that describes the type of hover event
that occurred. The hover event can be one of the following:

- ACTION_HOVER_ENTER: The pointer has entered the bounds of the view.
- ACTION_HOVER_MOVE: The pointer has moved within the bounds of the view.
- ACTION_HOVER_EXIT: The pointer has left the bounds of the view.

Your View.OnHoverListener should return true from onHover() if it handles the hover event. If your listener returns false, then the hover event will be dispatched to the parent view as usual.
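A small sketch:

view.setOnHoverListener(new View.OnHoverListener() {
    @Override
    public boolean onHover(View v, MotionEvent event) {
        switch (event.getAction()) {
            case MotionEvent.ACTION_HOVER_ENTER:
                // The cursor entered the view's bounds
                return true;
            case MotionEvent.ACTION_HOVER_EXIT:
                // The cursor left the view's bounds
                return true;
        }
        return false; // let unhandled events go to the parent
    }
});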
If your application uses buttons or other widgets that change their appearance based on the
current state, you can now use the android:state_hovered
attribute in a state list drawable to
provide a different background drawable when a cursor hovers over the view.
For a demonstration of the new hover events, see the Hover class in ApiDemos.
Android now provides APIs for receiving input from a stylus input device such as a digitizer tablet peripheral or a stylus-enabled touch screen.
Stylus input operates in a similar manner to touch or mouse input. When the stylus is in contact with the digitizer, applications receive touch events just as they would when a finger touches the display. When the stylus hovers above the digitizer, applications receive hover events just as they would when a mouse pointer moves across the display with no buttons pressed.
Your application can distinguish between finger, mouse, stylus and eraser input by querying the
“tool type” associated with each pointer in a MotionEvent
using getToolType()
. The currently defined tool types are: TOOL_TYPE_UNKNOWN
, TOOL_TYPE_FINGER
,
TOOL_TYPE_MOUSE
, TOOL_TYPE_STYLUS
,
and TOOL_TYPE_ERASER
. By querying the tool type, your application
can choose to handle stylus input in different ways from finger or mouse input.
Your application can also query which mouse or stylus buttons are pressed by querying the “button
state” of a MotionEvent
using getButtonState()
. The currently defined button states are: BUTTON_PRIMARY
, BUTTON_SECONDARY
, BUTTON_TERTIARY
, BUTTON_BACK
, and BUTTON_FORWARD
. For convenience, the back and forward mouse buttons are
automatically mapped to the KEYCODE_BACK
and KEYCODE_FORWARD
keys. Your application can handle these keys to support
mouse button based back and forward navigation.
In addition to precisely measuring the position and pressure of a contact, some stylus input
devices also report the distance between the stylus tip and the digitizer, the stylus tilt angle,
and the stylus orientation angle. Your application can query this information using getAxisValue()
with the axis codes AXIS_DISTANCE
, AXIS_TILT
, and AXIS_ORIENTATION
.
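For example, a drawing view might branch on the tool type (a sketch only; the drawing logic is left as comments):

@Override
public boolean onTouchEvent(MotionEvent event) {
    switch (event.getToolType(0)) { // tool type of the first pointer
        case MotionEvent.TOOL_TYPE_STYLUS:
            float pressure = event.getPressure();
            float tilt = event.getAxisValue(MotionEvent.AXIS_TILT);
            // Draw a stroke whose width tracks pressure and tilt
            break;
        case MotionEvent.TOOL_TYPE_ERASER:
            // Erase instead of draw
            break;
        default:
            // Finger or mouse input
            break;
    }
    return true;
}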
For a demonstration of tool types, button states and the new axis codes, see the TouchPaint class in ApiDemos.
The new Property
class provides a fast, efficient, and easy way to specify a
property on any object that allows callers to generically set/get values on target objects. It also
allows the functionality of passing around field/method references and allows code to set/get values
of the property without knowing the details of what the fields/methods are.
For example, if you want to set the value of field bar
on object foo
, you would
previously do this:
foo.bar = value;
If you want to call the setter for an underlying private field bar
, you would previously
do this:
foo.setBar(value);
However, if you want to pass around the foo
instance and have some other code set the
bar
value, there is really no way to do it prior to Android 4.0.
Using the Property
class, you can declare a Property
object BAR
on class Foo
so that you can set the field on instance foo
of
class Foo
like this:
BAR.set(foo, value);
The View
class now leverages the Property
class to
allow you to set various fields, such as transform properties that were added in Android 3.0 (ROTATION
, ROTATION_X
, TRANSLATION_X
, etc.).
The ObjectAnimator
class also uses the Property
class, so you can create an ObjectAnimator
with a Property
, which is faster, more efficient, and more type-safe than the string-based
approach.
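For example, a one-liner animating a view with the type-safe Property reference (myView is a hypothetical View instance):

// Animates the view's rotation using the Property-based ObjectAnimator factory
ObjectAnimator animator =
        ObjectAnimator.ofFloat(myView, View.ROTATION, 0f, 360f);
animator.setDuration(500);
animator.start();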
Beginning with Android 4.0, hardware acceleration for all windows is enabled by default if your
application has set either targetSdkVersion
or
minSdkVersion
to
“14”
or higher. Hardware acceleration generally results in smoother animations, smoother
scrolling, and overall better performance and response to user interaction.
If necessary, you can manually disable hardware acceleration with the hardwareAccelerated
attribute for individual <activity>
elements or the <application>
element. You can alternatively disable hardware acceleration for individual views by calling setLayerType(LAYER_TYPE_SOFTWARE)
.
For more information about hardware acceleration, including a list of unsupported drawing operations, see the Hardware Acceleration document.
In previous versions of Android, JNI local references weren’t indirect handles; Android used direct pointers. This wasn't a problem as long as the garbage collector didn't move objects, but it was dangerous because it made it possible to write buggy code that nevertheless seemed to work. In Android 4.0, the system now uses indirect references in order to detect these bugs.
The ins and outs of JNI local references are described in “Local and Global References” in JNI Tips. In Android 4.0, CheckJNI has been enhanced to detect these errors. Watch the Android Developers Blog for an upcoming post about common errors with JNI references and how you can fix them.
This change in the JNI implementation only affects apps that target Android 4.0 by setting either
the targetSdkVersion
or minSdkVersion
to “14”
or higher. If you’ve set these attributes to any lower value,
then JNI local references behave the same as in previous versions.
WebView and the built-in Browser have also been updated. The Browser application adds new features to support web applications.
The following are new permissions:
- ADD_VOICEMAIL: Allows a voicemail service to add voicemail messages to the device.
- BIND_TEXT_SERVICE: A service that implements SpellCheckerService must require this permission for itself.
- BIND_VPN_SERVICE: A service that implements VpnService must require this permission for itself.
- READ_PROFILE: Provides read access to the ContactsContract.Profile provider.
- WRITE_PROFILE: Provides write access to the ContactsContract.Profile provider.

The following are new device features:
- FEATURE_WIFI_DIRECT: Declares that the application uses Wi-Fi for peer-to-peer communications.

In addition to everything above, Android 4.0 naturally supports all APIs from previous releases. Because the Android 3.x (Honeycomb) platform is available only for large-screen devices, if you've been developing primarily for handsets, then you might not be aware of all the APIs added to Android in these recent releases.
Here's a look at some of the most notable APIs you might have missed that are now available on handsets as well:
- Fragment: A framework component that allows you to separate distinct elements of an activity into self-contained modules that define their own UI and lifecycle. See the Fragments developer guide.
- ActionBar: A replacement for the traditional title bar at the top of the activity window. It includes the application logo in the left corner and provides a new interface for menu items. See the Action Bar developer guide.
- Loader: A framework component that facilitates asynchronous loading of data in combination with UI components to dynamically load data without blocking the main thread. See the Loaders developer guide.
- Hardware-accelerated 2D drawing: You can enable hardware acceleration with the hardwareAccelerated attribute in the <application> element or for individual <activity> elements. This results in smoother animations, smoother scrolling, and overall better performance and response to user interaction. Note: If you set your application's minSdkVersion or targetSdkVersion to "14" or higher, hardware acceleration is enabled by default.
- New APIs for Media Transfer Protocol (MTP); see the android.mtp documentation.
- New APIs for Real-time Transport Protocol (RTP); see the android.net.rtp documentation.
- You can now use <uses-feature> to declare landscape or portrait screen orientation requirements.
- You must declare that your application handles the "screenSize" configuration change if you also want to handle the "orientation" configuration change. See android:configChanges for more information.

For a detailed view of all API changes in Android 4.0 (API Level 14), see the API Differences Report.
The Android 4.0 API is assigned an integer identifier—14—that is stored in the system itself. This identifier, called the "API level", allows the system to correctly determine whether an application is compatible with the system, prior to installing the application.
To use APIs introduced in Android 4.0 in your application, you need to compile the
application against an Android platform that supports API level 14 or
higher. Depending on your needs, you might also need to add an
android:minSdkVersion="14"
attribute to the
<uses-sdk>
element.
For more information, see the API Levels document.
The system image included in the downloadable platform provides a standard set of built-in applications.
The system image included in the downloadable SDK platform provides a variety of built-in locales. In some cases, region-specific strings are available for the locales; in other cases, a default version of the language is used.
Note: The Android platform may support more locales than are included in the SDK system image. All of the supported locales are available in the Android Open Source Project.
The downloadable platform includes a set of emulator skins.
To test your application on an emulator that represents the latest Android device, you can create an AVD with the new WXGA720 skin (it's an xhdpi, normal screen device). Note that the emulator currently doesn't support the new on-screen navigation bar for devices without hardware navigation buttons, so when using this skin, you must use keyboard keys Home for the Home button, ESC for the Back button, and F2 or Page-up for the Menu button.
However, due to performance issues in the emulator when running high-resolution screens such as the one for the WXGA720 skin, we recommend that you primarily use the traditional WVGA800 skin (hdpi, normal screen) to test your application.