Thursday, September 30, 2010

More Countries, More Sellers, More Buyers

[This post is by Eric Chu, Android Developer Ecosystem. — Tim Bray]

Since we launched Android and Android Market, we have seen the population of Android users and devices expand into many countries. This widespread adoption has brought with it growing interest in Android Market’s support for the buying and selling of paid applications in these additional countries.

We have been hard at work on this and it is my pleasure to announce that effective today, developers from 20 more countries can now sell paid apps on Android Market. Additionally, over the next 2 weeks, users in 18 additional countries will be able to purchase paid apps from Android Market.

Support for paid application sales is now expanded to developers in 29 countries, with today’s additions of Argentina, Australia, Belgium, Brazil, Canada, Denmark, Finland, Hong Kong, Ireland, Israel, Mexico, New Zealand, Norway, Portugal, Russia, Singapore, South Korea, Sweden, Switzerland and Taiwan.

In addition, Android Market users from 32 countries will be able to buy apps, with the addition of Argentina, Belgium, Brazil, Czech Republic, Denmark, Finland, Hong Kong, India, Ireland, Israel, Mexico, Norway, Poland, Portugal, Russia, Singapore, Sweden, and Taiwan. No action is necessary if you have targeted your paid apps to be available to “All Locations” and would like to launch in these additional countries. If you have not selected “All Locations” and would like to target these additional countries, or if you have selected “All Locations” and do not want to launch your apps in these additional buyer countries, please visit the Android Market publisher site regularly over the next two weeks to make the necessary adjustments as the new buyer countries launch.

We remain committed to continuing to improve the buyer and seller experiences on Android Market. Among other initiatives, we look forward to bringing the Android Market paid apps ecosystem to even more countries in the coming months. Please stay tuned.

Wednesday, September 29, 2010

Articulated Naturality (AN) vs Augmented Reality (AR) - the new frontier of Mobile 2.0


QPC - Articulated Naturality Web from Justin Montgomery on Vimeo.

Special thanks to the great team at The Where Business who alerted me to this video about the fresh new notion of Articulated Naturality or AN. 

AN is set to take Augmented Reality (AR) to a whole new level. With an AN app, you could point at a hotel building, zoom in on a particular room, and find out whether it is available for that night or not. The company pitching AN widely is QPC, which participated in the recent Summer Davos Conference.

Check out the video, where Matt Trubow, QPC CIO, explains why AN is set to give 'flat' AR a much richer three-dimensional element, or a 'virtual universe'... inspiring stuff...

Tuesday, September 28, 2010

Reflections on G-Kenya

[This post is by Reto Meier AKA @retomeier, who wrote the book on Android App development. — Tim Bray]

Recently I visited Kenya for the three-day G-Kenya event. I was there for two reasons:

  • To talk about Android and the emerging mobile opportunities for African developers.

  • To ask questions and find out more about the reality of mobiles and writing code from the people there.

Of the countries I’ve visited to talk about Android, nowhere have people had such a close connection to their mobile phones as in Africa. While most Kenyans own feature phones, those mobiles are already used as much more than simple phones. Mobile payments are already common, and cheap data plans mean that many people access the Internet exclusively through mobile handsets.

There were two Android announcements while I was in town: a new low-cost Android handset (the Huawei U8220), and Android Market access for Kenyans. I can’t wait to see the kind of apps that come from developers who live in an environment where mobile is so pervasive.

Day 1: Students

G-Kenya was set within the beautiful campus of the Strathmore Business School, so it was fitting that day one was addressed to students.

Of the three groups, the students were the most enthusiastic about Android. This was likely influenced by their confidence that by the time they graduate, modern smartphones in Africa will have become the norm.

I love talking to student developers: without the commercial pressures of finding customers or a monetization model, they're free to innovate on whatever technology platforms they think are interesting.

Day 2: Developers

Modern smartphones are not yet prevalent in Africa, so it wasn’t surprising that many of the developers are currently focusing on feature phones. That said, it was generally acknowledged that it was a question of when rather than if smartphones would come to dominate. The trick will be picking the right time to invest in Android so that they're ready to take advantage.

Plenty of developers believe that time is right now. It was a pleasure to meet the guys behind Ushahidi, creators of an Android app built to report and record incidents during the 2008 election violence. Since their launch they’ve expanded to offer a global platform for crowd-sourced news where timeliness is critical.

I love the opportunity Android Market delivers to developers like Ushahidi and Little Fluffy Toys (of London Cycle Hire fame). An app that solves a problem for your local community can easily be expanded to offer solutions to similar problems across the world.

Developer focus in Kenya seemed to follow similar lines:

  • Create products and services targeted at local communities (such as the developers creating a distributed system to help health-care workers record medical information in the field.)

  • Build robust cloud-based services that provide access to users from any mobile platform.

  • Expand from feature phones to Android to incorporate features like GPS positioning, maps, and recording video and audio.

Day 3: Entrepreneurs and Marketers

No one was surprised to see a lot of the developers from the previous day return for entrepreneur day, and the apparent lack of Android questions from Day 2 was more than made up for on Day 3; the “AppEngine Challenge” on Day 2 fielded a record 30 entries, so it seems everyone was working on their entries rather than asking questions!

I didn’t speak on Day 3, but spent all day fielding questions from eager mobile developers hoping to catch the Android wave as early innovators and first movers. That included a team working to provide real-time tracking of matatus (the local public-transit minibuses) via GPS and Android devices.

Reflections

It’s an exciting time to be a developer in Kenya. I regularly asked developers how long they thought it would take for Android devices to become commonplace. Many suggested that if I came back this time next year I'd see a flood of Android devices. Even the more pessimistic predicted no more than 3 years.

As I traveled back towards Jomo Kenyatta International, listening to the radio offering a free Sony Ericsson X10 Mini to one lucky caller, the future didn’t seem very far away.

The Keys to Great App Discoverability - App Store Tricks for Mobile 2.0 (PART II)


Getting your app to be discovered amongst a galaxy of options out there in the mobile 'appspace' is rapidly becoming the key to success or failure for new services (as well as for existing brands).

One of the key decisions developers, entrepreneurs and companies alike need to make is whether their app should be free or whether it should be premium.

There are good reasons to go down either path. Consumers love free apps, so if you are looking principally for volume of downloads, free is the way to go. However, there is no such thing as a free lunch: someone, somewhere down the line has to pay. Enter 'freemium' models, which allow 'free-sounding' apps to actually generate revenue at some point.

App Store Discoverability rests on four cardinal points:

1. App Reviews
2. App Rankings
3. App Analytics
4. App Discoverability Services

You can find more detail as well as some great examples in Chapter 11 of my book.


The key message is that you can significantly increase the odds of your app becoming popular by understanding the dynamics of how App Stores work. To avoid getting lost in this murky world, developers should never lose sight of their prime objective: getting the wider public to discover, examine and download their app. This means experimenting with different approaches, and tracking the impact of these tactics on app downloads over a period of time.

Monday, September 27, 2010

The Keys to Great App Discoverability - App Store Tricks for Mobile 2.0 (PART I)

I will be presenting some ideas from my book, Location Aware Applications, next week in Malta, which made me think about an issue that keeps popping up time and again in mobile development: how to get a great app downloaded by thousands of users.

With an 'app universe' of over 300,000 applications, over 8 mobile development platforms and over 50 app stores to choose from, it has never been this tough to pick the right distribution option(s). And if you zoom into individual App Stores, the level of detail becomes even more mind-boggling: the iTunes Store has over 200,000 apps available in more than 30 individual country stores, with over 20 different App Categories!

First, a disclaimer: there is no one method that guarantees a successful distribution and discovery of your app.

However, there are some tips and good practice that you can follow to boost your chances of succeeding. 

The first step to get you on the right track is to answer five fundamental questions:


Tuesday, September 21, 2010

Proguard, Android, and the Licensing Server

[This post is by Dan Galpin, an Android Developer Advocate specializing in games and comics. — Tim Bray]

The Securing Android LVL Applications blog post makes it clear that an Android developer should use an obfuscation tool such as Proguard in order to help safeguard their applications when using License Server. Of course, this does present another question. How should one integrate such a tool with the Android build process? We’re specifically going to detail integrating Proguard in this post.

Before you Begin

You must be running the latest version of the Android SDK Tools (at least v7). The new Ant build rules file included with v7 contains hooks to support user-created pre- and post-compile steps, in order to make it easier to integrate tools such as Proguard into an Android build. It also integrates a single rules file for building against all versions of the Android SDK.

Adding an Optimization Step to build.xml

First, you’ll have to get Proguard if you don’t yet have it.

If you’ve been using Eclipse to do your development, you’ll have to switch to using the command line. Android builds are done using Apache Ant. A version of Ant ships along with Eclipse, but I recommend installing your own version.

The Android SDK can build you a starter build.xml file. Here is how it’s done:

android update project --path ./MyAndroidAppProject

If all works well, you’ll have a shiny new build.xml file sitting in your path. Let’s try doing a build.

ant release

You should end up with an unsigned release build. The command-line tools can also sign your build for you. You’ll notice that the android tool created a local.properties file in your directory. It will contain the sdk.dir property. You can have it make you a signed build by adding the location of your keystore and alias to this file.

key.store=/Path/to/my/keystore/MyKeystore.ks
key.alias=myalias

So, now you have a signed build from the command line, but still no obfuscated build. To make things easy, you’re going to want to get two helper files: add-proguard-release.xml and procfg.txt.

Copy these files into your root directory (where the build.xml file sits). To add Proguard to your build, you first need to edit your local properties file to add the location of the directory that Proguard is installed in:

proguard.dir=/Directory/Proguard/Is/Installed/In

Finally... you need to add our script to your build file and have it override a few targets. To do this, we use the XML “entity” construct. At the top of your build.xml file, add an entity that references our script file:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE project [
<!ENTITY add-proguard-release SYSTEM "add-proguard-release.xml">
]>

You’re not done yet. Somewhere within the project tag add the reference to our entity to include our script.

<project name="MyProjectName" default="help">
&add-proguard-release;

That’s it! In many cases, calling

ant release

will give you an obfuscated build. Now test and make sure that it hasn’t broken anything.

But Wait, My App is Crashing Now

Most crashes happen because Proguard has obfuscated away something that your application needs, such as a class that is referenced in the AndroidManifest or within a layout, or perhaps something called from JNI or reflection. The Proguard configuration provided here tries to avoid obfuscating most of these cases, but it’s still possible that in edge cases you’ll end up seeing something like a ClassNotFoundException.

You can make edits to the procfg.txt file to keep classes that have been obfuscated away. Adding:

-keep public class * [my classname]

should help. For more information about how to prevent Proguard from obfuscating specific things, see the Proguard manual. Specifically, the keep section. In the interest of security, try to keep as little of your application unobfuscated as possible.
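For illustration only (these lines are not part of the original procfg.txt, and the class name is a placeholder), keep rules typically look something like the following: one to preserve a custom View that layouts reference by name, and one to preserve anything JNI looks up by name.

# Keep a specific class (and its XML-inflation constructor) that is referenced
# by name from AndroidManifest.xml or a layout file. Substitute your own class.
-keep public class com.example.myapp.MyCustomView {
    public <init>(android.content.Context, android.util.AttributeSet);
}

# Keep the names of native methods, since JNI resolves them by name.
-keepclasseswithmembernames class * {
    native <methods>;
}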

The standard settings provided in procfg.txt will be good for many applications, and will catch many common cases, but they are by no means comprehensive. One of the things we’ve done is have Proguard create a bunch of output files in the obf directory to help you debug these problems.

The mapping.txt file explains how your classes have been obfuscated. You’ll want to make sure to keep this around once you have submitted your build to Market, as you’ll need this to decipher your stack traces.
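As an aside (not covered in the original post): ProGuard ships with a retrace tool, in its bin directory, that uses this mapping to turn an obfuscated stack trace back into readable class and method names. Assuming you saved a crash trace to stacktrace.txt and the mapping file is the one written to the obf directory mentioned above, the invocation looks roughly like this:

retrace.sh -verbose obf/mapping.txt stacktrace.txt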

Conclusion

Tools such as Proguard make the binary of your application harder to understand, and make your application slightly smaller and more efficient at the same time, at the cost of making it slightly more challenging to debug problems in the field. For many applications, the tradeoff is more than worthwhile.

The iPad Tablet Revolution - Three Reasons that Explain Why it is the Future


These days, it is very difficult to filter real news from the substantial amount of 'digital media noise' created on the web. This is even more the case when it comes to new gadgets and technology. And no gadget caused more noise and expectation than the iPad, when it was launched to a global fanfare in April this year.

I will not review the iPad's hardware and technical capabilities: enough has been said already and you can easily read about them elsewhere. I just want to make one point that is getting lost in the media hype: the iPad is the most revolutionary device type to have been rolled out this millennium. It is not a fad, it is not a toy, and it is most definitely not merely a giant-sized iPhone.

It is the exquisite execution of the tablet computing concept that others before it, including Microsoft, tried and failed to deliver. But let me say this again: the iPad is a revolutionary device. It is set to change mobile media forever. Here are three reasons why:

1. The iPad introduces a totally new way to consume mobile media, especially newspapers and magazines. Its screen size achieves a happy medium between readability and portability. Again, something that is easily overlooked by reviewers is its landscape mode: turn the iPad on its side from its vertical position and it automatically switches to landscape reading mode. This makes it ideal for newspaper and magazine browsing (Kindle take note). It also makes using those wonderful apps an even better experience.

2. The iPad is the first device of its size and weight on which real computing can be carried out. You can actually create and work on a PowerPoint presentation on an iPad. Yes, processing power needs to be upgraded, but then Apple is not perfect.

3. The iPad is designed to be a connected device. With 3G capability designed into the device (unlike a netbook, which relies on a dongle or WiFi), data exchange and sharing come as standard. Forget the WiFi-only version of the iPad; I am sure it will be phased out in due course. (Significantly, the iPad was conceived before the iPhone, but put on hold to prioritise launching the smaller 3G iPhone device.)

The iPad is an inspirational product, like many Apple creations before it. But don't take my word for it. You can check out many videos of how it was used as different musical instruments to create compositions. You'll find one in the title link. Enjoy!

Tuesday, September 14, 2010

Supporting the new music Voice Action

[This post is by Mike LeBeau, the Tech Lead and architect behind Voice Actions. — Tim Bray]

We recently launched Voice Actions in the new Google Voice Search for Android — an awesome new way to search, control, and communicate on your phone faster than ever before, by using your voice.

One of these new Voice Actions lets users find and automatically play music. By speaking something like “listen to They Might Be Giants” into the new Voice Search, users can quickly find the music they want online and play it, using any number of different apps. (Pandora, Last.fm, Spotify, mSpot, and Rdio are among the first apps to support this.)

To do this, we leveraged a very common little piece of Android magic: a new Intent. If you develop a music app that supports open-ended music search, you can make it work with users speaking “listen to” Voice Actions simply by registering for the new intent we’ve defined. This new intent isn’t defined as a constant in the SDK yet, but we wanted to make sure music app developers had all the information needed to use it right away.

Here’s all you should need to know:

  • In your AndroidManifest.xml, just register one of your activities for the new intent android.media.action.MEDIA_PLAY_FROM_SEARCH:

    <application android:label="@string/app_name" android:icon="@drawable/icon">
        <activity android:name="MusicActivity" android:label="@string/app_name">
            <intent-filter>
                <action android:name="android.media.action.MEDIA_PLAY_FROM_SEARCH" />
                <category android:name="android.intent.category.DEFAULT" />
            </intent-filter>
        </activity>
    </application>
  • When your activity receives this intent, you can find the user’s search query inside the SearchManager.QUERY string extra:

    import android.app.Activity;
    import android.app.SearchManager;
    import android.os.Bundle;

    public class MusicActivity extends Activity {
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            String query = getIntent().getStringExtra(SearchManager.QUERY);
            // Do something with query...
        }
    }

    This will represent everything the user spoke after “listen to”. This is totally open-ended voice recognition, and it expects very flexible search — so, for example, the string could be the name of any artist (“they might be giants”), an album (“factory showroom”), a song (“metal detector”), or a combination of any of these (“metal detector by they might be giants”).
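One easy way to exercise your handler during development, before speaking to Voice Search itself, is to fire the same intent from a scratch activity or test harness. A minimal sketch (not from the original post):

    import android.app.SearchManager;
    import android.content.Intent;

    // Send the same intent Voice Search would send, with a hard-coded query.
    // Run this from inside an Activity.
    Intent listenTo = new Intent("android.media.action.MEDIA_PLAY_FROM_SEARCH");
    listenTo.putExtra(SearchManager.QUERY, "they might be giants");
    startActivity(listenTo);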

A few subtle details worth understanding about this intent:

  • Your app should do its best to quickly find and automatically play music corresponding to the user’s search query. The intention here is to get users to their desired result as fast as possible, and in this case, that means playing music quickly.

  • This will really only work well for music apps that can find music across a very large corpus of options. Because our voice recognition doesn’t currently support any way to provide a list of specific songs to be recognized, trying to use it against a small set of music choices will work poorly — things which are not in the set will be over-recognized, and things which are in the set may not be recognized well. So if you’re not the developer of a large-scale cloud music application, this intent is probably not for you.

We think you’ll find this new intent can greatly enhance your music app’s experience for users. And we hope you enjoy our new Voice Actions as much as we do!



Monday, September 13, 2010

Screen Geometry Fun

The recent announcement of the Samsung Galaxy Tab should be a wake-up call for Android developers. What’s scary is that we’ve never seen a screen like this on an Android device before. What’s reassuring is that most apps Just Work (in fact, a lot of the ones I’ve tried so far have looked terrific) and the potential problems are easy to avoid. Here’s what you need to do to take advantage of not just the Tab, but all the new form factors that are coming down the pipe.

Let’s consider the Tab as a “teachable moment”:

  • Its screen is 1024x600; no compatible device’s screen has ever had a thousand pixels in any dimension before.

  • A lot of people are going to want to hold it sideways, in “landscape” mode, most of the time.

We recommend spending quality time with the Developers’ Guide discussion of supporting multiple screens; we'll keep revising it as the device landscape changes. Also, this blog recently ran Dan Morrill’s One Screen Turn Deserves Another, which should help out in handling the landscape default.

What density means

When you build your app, you can provide layouts and assets (graphics) which vary by screen density, screen size, and landscape or portrait orientation. Clearly, pulling these together is not as much fun as designing groovy layouts and clever Intent filters; but there’s no way around it.
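As a hypothetical illustration (the file names are placeholders), a project that does this carries alternative resources under qualified directories, and the framework picks the best match at runtime:

    res/drawable-ldpi/icon.png        low-density (~120 dpi) artwork
    res/drawable-mdpi/icon.png        medium-density (~160 dpi) artwork
    res/drawable-hdpi/icon.png        high-density (~240 dpi) artwork
    res/layout/main.xml               default layout
    res/layout-large/main.xml         layout tuned for large screens
    res/layout-land/main.xml          layout tuned for landscape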

In this context, the Samsung has another little surprise: If you do the arithmetic, its screen has 170 DPI, which is far from the densest among Android devices. Still, it declares itself as “hdpi” (and as having a “large” screen size). The reason is simple: It looks better that way.

Samsung found that if you rendered your graphical resources bit-for-bit from medium-density sources, they looked fine, but most large-screen designs ended up looking sparse, with too much space between buttons and icons. With the screen declared as high-density, the framework scales up the resources by an amount that turns out to be just enough.

As a photography hobbyist, I’m reminded of how you juggle aperture and shutter speed and ISO sensitivity. If, for example, you want a fast shutter speed to capture a dancer in mid-leap, you’d better compensate with a wider aperture or more sensitivity. Similarly, the Galaxy Tab’s screen is at the large end of “large”, so declaring it as high-density applies a useful compensation.

The good news is that the scaling code in the framework is smart enough and fast enough that it comes out well; the graphics in my own apps look remarkably good on the Tab. Here is the front page of my “LifeSaver 2” app; first the Nexus One, then the Galaxy Tab, resized for presentation here. Different densities, different geometries, and the only important difference is that the version on the big screen looks prettier.

Your take-away should be what I said above: Make sure you provide your graphics at all three resolutions, and chances are the Android framework will find a way to make them look great on a huge variety of devices.

Other Ways To Go Wrong

As I noted, most apps work just fine on this kind of device, out of the box, no changes required. However, we have run across a few Worst Practices that can make your app look dorky or even broken; for example:

  • Using AbsoluteLayout; this is a recipe for trouble.

  • Using absolute rather than density-independent pixels.

  • One member of my group ran across a couple of apps that suffered a Null Pointer Exception because they were calculating screen size when their Activity started, and doing their own resource loading rather than letting the framework take care of it. The problem was that they hadn't built in handling for the 1024x600 screen. The problem would vanish if they'd hand the work to the framework (or at least make sure that all their switch statements had default cases).
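A small sketch of the defensive version (not from the original post): query DisplayMetrics, work in density-independent units, and give every size branch a default, instead of switching on known pixel dimensions.

import android.util.DisplayMetrics;

// Inside an Activity:
DisplayMetrics metrics = new DisplayMetrics();
getWindowManager().getDefaultDisplay().getMetrics(metrics);

// Convert between dp and px instead of hard-coding pixel values.
float density = metrics.density;                // 1.0 = mdpi, 1.5 = hdpi, ...
int thumbSizePx = (int) (64 * density + 0.5f);  // "64dp", whatever the screen

// If you must branch on size, always keep a default branch.
int widthDp = (int) (metrics.widthPixels / density);
if (widthDp >= 600) {
    // roomier layout for wide screens like the Tab
} else {
    // default layout; this is also what an unrecognized screen gets
}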

Escape the Shoebox

I've observed that a certain number of applications appear “shoeboxed”, running in a handset-like number of pixels in the center of the screen, surrounded by a wide black band. They work fine, but this is silly, and easy to avoid. It turns out that this happens when you have a targetSdkVersion value less than four; this is interpreted to mean that you’re targeting the legacy Cupcake flavor of Android, which only supported HVGA.
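If that is your situation, raising the target in your manifest's uses-sdk element (the values below are illustrative) is enough to opt out of that legacy compatibility mode:

<uses-sdk android:minSdkVersion="3" android:targetSdkVersion="4" />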

In any case, if you want to make 100% sure that your app doesn’t get pushed into the shoebox, the supports-screens element is your friend; here’s what we recommend:

<supports-screens android:largeScreens="true" android:anyDensity="true" />

(Both those attributes default to "false" for API levels less than 4.) Given a chance, the framework gets a good result on almost any Android screen imaginable.

Testing

When a device comes along that’s different in one way or another from what’s been available before, and you don’t have one, the only way to be sure your app will treat it properly is to run it on an Android emulator; the emulator code is flexible enough to model anything we’ve seen or know is coming down the pipe.

In the case of the Galaxy Tab, Samsung will be providing an SDK add-on including a custom AVD and skin, to make your life easier; I used a pre-release to make the LifeSaver screenshot above.

Why All the Extra Work?

Because, as 2010 winds down, Android isn’t just for phones, and isn’t just for things that fit in your pocket. The minor effort required to deal with this should pay off big-time in terms of giving your apps access to a universe of new kinds of devices.

Thursday, September 9, 2010

One Screen Turn Deserves Another

[This post is by Dan Morrill, Open Source & Compatibility Program Manager. — Tim Bray]

Android has an API for accessing a variety of sensor types, such as an accelerometer or light sensor. Two of the most commonly used sensors are accelerometers and magnetometers (that is, compasses). Applications and devices frequently use these as forms of user input, and to determine which way to orient the screen.

However, there’s a new wrinkle: recently, a few devices have shipped (see here and here) that run Android on screens that are naturally landscape in their orientation. That is, when held in the default position, the screens are wider than they are tall. This introduces a few fairly subtle issues that we’ve noticed causing problems in some apps. Now, part of the reason for this is that the Android SDK docs on the sensor API left a couple things unsaid, leading many developers to use them incorrectly. Even a couple of our own samples did the wrong thing. Sorry about that!

Fortunately, using these APIs correctly is pretty simple, if you keep three rules in mind:

  • The sensor coordinate system used by the API for the natural orientation of the device does not change as the device moves, and is the same as the OpenGL coordinate system.


  • Applications must not assume that the natural orientation is portrait. That's not true on all devices.


  • Applications that match sensor data to on-screen display must always use android.view.Display.getRotation() to map sensor coordinates to screen coordinates — even if their manifest specifies portrait-only display.


If you have a strong background in math, the three rules above may be all you need to work out the rest. But if that’s not you, the rest of this post explains things step-by-step, and gives some tips for using sensors correctly.

The Basic Problem

Before we dive in, here’s a tip that I personally have found to be helpful: always remember that the sensor data’s coordinate system never changes. Ever. The rest of this post is going to talk about coordinate systems and rotations and so on. But sometimes when your head is deep in 3D transforms, you can get disoriented, so I’ve found it helps to frequently remind myself that no matter what is happening to the screen, the sensor coordinate system never changes.

Now with that tip in mind, we need an example to talk about. Let’s consider a simple app that draws an arrow that always points in the direction of gravity, animating the arrow as the user moves the phone around, like a plumb-bob. When a typical phone is held normally, the arrow points down, as shown in Figure A:

(Note: In the figures in this post, the letter “G” means the direction of gravity in the sensor coordinate system. In Figure A, for example, “G = -y” means that gravity is aligned with the device’s negative-Y axis, as measured by the accelerometer. And remember — the sensor coordinate system never changes!)

This app is pretty straightforward to implement in OpenGL: you simply need to draw an arrow on a GL SurfaceView, after rotating the coordinate space in response to the sensor data returned by the accelerometer. This “just works” because — in this basic case — the OpenGL screen coordinate system lines up with the sensor coordinate system.
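For reference, the sensor side of such an app is just a listener registration; here is a rough sketch (not from the original post), with the GL drawing left out:

import android.content.Context;
import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Inside the Activity hosting the GLSurfaceView:
SensorManager sm = (SensorManager) getSystemService(Context.SENSOR_SERVICE);
Sensor accel = sm.getDefaultSensor(Sensor.TYPE_ACCELEROMETER);
sm.registerListener(new SensorEventListener() {
    public void onSensorChanged(SensorEvent event) {
        // event.values[] is in the (never-changing) sensor coordinate system.
        float gx = event.values[0];
        float gy = event.values[1];
        // Derive the direction of gravity from (gx, gy) and rotate the
        // arrow accordingly, e.g. by handing an angle to the GL renderer.
    }
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}, accel, SensorManager.SENSOR_DELAY_GAME);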

So, this technique works, and the arrow will always point down — until you turn the phone too far.

So What’s the Problem?

Most Android devices use the accelerometer to detect when the device is being held sideways, and rotate the screen accordingly. This normally causes the apps to display horizontally, from the point of view of the user.

What this reorientation actually does is remap the X and Y axes, causing the app to draw itself horizontally. However, the Android sensor APIs define the sensor coordinate space to be relative to the top and side of the device — not the short and long sides. When the system reorients the screen in response to holding the phone sideways, the sensor coordinate system no longer lines up with the screen’s coordinate system, and you get unexpected rotations in the display of your app. Figure B shows an example:

There are a couple of different fixes for this problem that are commonly used today, but we’ve noticed that these often don’t work properly on landscape-default devices.

A common first attempt to solve the auto-rotation problem is to simply lock the screen to portrait mode, via the android:screenOrientation attribute in AndroidManifest.xml. This prevents the system from performing a screen coordinate system remap in response to device orientation, and so the sensor and screen coordinate systems remain in sync. However, locking the screen to portrait mode this way prevents the coordinate systems from getting out of sync on portrait-default devices, but causes them to become out of sync on landscape-default devices. This is because it forces a screen reorientation on those devices.

The second common technique is to detect when the device is in landscape mode, and compensate for it by adding a rotation to the graphics that are displayed. Unfortunately, this technique is often only a partial fix, because if you aren’t careful about detecting landscape mode, you will again cause an unnecessary compensation on landscape-default devices.

The Correct Fix

So what’s a poor developer to do? This seems like a catch-22: you can’t prevent screen reorientation, but you can’t compensate for it, either.

Or can you? Actually, you can compensate — you just have to make sure you’re correctly detecting when compensation is necessary. The question is, how does the device tell you that it’s been reoriented? And the answer is: android.view.Display.getRotation().

That method will return one of four values, indicating that either the device has not been reoriented (ROTATION_0), or that it has been reoriented by 90 degrees, 180 degrees, or 270 degrees (which respectively are ROTATION_90, ROTATION_180, and ROTATION_270.)

Pay special attention to those last two. ROTATION_180 and ROTATION_270 mean that each device actually has two portrait and two landscape modes: normal portrait and landscape, and the upside-down versions of each. Some Android devices that do “360 reorientation” will use these rotation modes as well, so you need to handle this generally, beyond just accounting for portrait or landscape mode.

Once you have the screen orientation info in hand, you can treat it as a rotation around the screen’s Z axis when rendering graphics. By applying the rotation to the values you get from your SensorEventListener, you can correctly and reliably compensate for screen reorientations on all devices.

Note that Display.getRotation() will tell you if the screen has been reoriented at all, not that it was reoriented specifically in response to the accelerometer. For example, even if you disable accelerometer-based reorientation by using android:screenOrientation="nosensor", your app might still be reoriented if the user has opened a hard keyboard on the device.

Because handling all this involves some math that can be a bit of a chore, as a convenience we’ve provided the android.hardware.SensorManager.remapCoordinateSystem() method to do much of this remapping work for you. If you choose not to use this method, you can achieve a similar effect by essentially swapping axes, along with the rule of thumb that two axis swaps require that you negate the third axis. (Since this is a bit error-prone, we do recommend that you use remapCoordinateSystem() when you can.)
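Here is a minimal sketch of that compensation (not from the original post). It assumes it runs inside an Activity and that accelValues and magValues hold your latest accelerometer and magnetometer readings:

import android.content.Context;
import android.hardware.SensorManager;
import android.view.Display;
import android.view.Surface;
import android.view.WindowManager;

// Build a rotation matrix from the raw readings (sensor coordinates), then
// remap it into the current screen coordinate system.
// Display.getRotation() is available from API level 8 (Froyo).
float[] inR = new float[9];
float[] outR = new float[9];
SensorManager.getRotationMatrix(inR, null, accelValues, magValues);

Display display = ((WindowManager) getSystemService(Context.WINDOW_SERVICE)).getDefaultDisplay();
switch (display.getRotation()) {
    case Surface.ROTATION_90:
        SensorManager.remapCoordinateSystem(inR, SensorManager.AXIS_Y, SensorManager.AXIS_MINUS_X, outR);
        break;
    case Surface.ROTATION_180:
        SensorManager.remapCoordinateSystem(inR, SensorManager.AXIS_MINUS_X, SensorManager.AXIS_MINUS_Y, outR);
        break;
    case Surface.ROTATION_270:
        SensorManager.remapCoordinateSystem(inR, SensorManager.AXIS_MINUS_Y, SensorManager.AXIS_X, outR);
        break;
    case Surface.ROTATION_0:
    default:
        System.arraycopy(inR, 0, outR, 0, 9);  // screen and sensor coordinates already agree
        break;
}
// outR is now expressed in screen coordinates; feed it to
// SensorManager.getOrientation(), your GL transform, and so on.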

Recipes for Sensuous Delights

Okay, now we’ve got a technique that we can rely on to work on all devices. But how do you update your app? To give you a more explicit helping hand on how to fix your apps, I’ve whipped up a few recipes for updating your apps.

Apps That Never Draw Sensor Data

Apps that never display graphics derived from sensor data usually don’t need to make any changes. Examples of this type of app are those that detect bumps to the device, those that use sensors for gesture input, apps that monitor g-forces (watching for free-fall or acceleration), and so on. These apps aren’t drawing images that vary according to the device’s orientation.

This isn’t a hard and fast rule; there probably are some apps out there that do need to take screen orientation into consideration, even though they don’t draw graphics depicting the sensor data. But, if your app just uses sensors in the background, there’s a good chance you won’t need to make any changes.

Apps That Work in Both Portrait and Landscape

Most Android apps work fine in both portrait and landscape, using the standard tools. If your app is one of these and you also use sensors, the only change your app probably requires is a tweak to use the behavior I outlined above. That is:

  • Don’t assume that portrait is the default mode.


  • Don’t assume that locking your app to portrait mode solves this issue.


  • Don’t assume that disabling sensor-based reorientation solves this issue (since reorientations also occur on some devices when the user opens a keyboard.)


  • Check for the current device orientation via getRotation(), and compensate accordingly, as detailed earlier.


Apps That Only Work in One Orientation

Some apps — notably, many games — only work well (or at all!) in either portrait or landscape mode. It’s perfectly okay, of course, for such apps to lock themselves to the appropriate mode, and doing so simplifies the sensor handling quite a bit.

However, because Android devices actually support two landscape and two portrait modes, these apps still need to check the current orientation. That is, if an app locks itself to landscape mode, it will need to perform a compensation on portrait-default devices, but not on landscape-default devices. And of course — are you sick of hearing this yet? — this can be accomplished by checking the result of getRotation().

Phew! Quite a mouthful for what is a fairly straightforward notion, once you understand what’s going on. But if I had to distill all that down into a single sentence, it would be this: android.view.Display.getRotation() is your friend.

I hope you’ve found this information useful; what’s more, I hope you’ve found it practical. We’ll keep improving our SDK and docs, and I hope you’ll keep improving your apps.

Happy coding!

Wednesday, September 1, 2010

Brace for the Future

[This post is by Dan Morrill, Open Source & Compatibility Program Manager. — Tim Bray]

Way back in November 2007 when Google announced Android, Andy Rubin said “We hope thousands of different phones will be powered by Android.” But now, Android’s growing beyond phones to new kinds of devices. (For instance, you might have read about the new 7” Galaxy Tab that our partners at Samsung just announced.) So, I wanted to point out a few interesting new gadgets that are coming soon running the latest versions of Android, 2.1 and 2.2.

For starters, the first Android-based non-phone handheld devices will be shipping over the next few months. Some people call these Mobile Internet Devices or Personal Media Players — MIDs or PMPs. Except for the phone part, PMP/MID devices look and work just like smartphones, but if your app really does require phone hardware to work correctly, you can follow some simple steps to make sure your app only appears on phones.
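For example, a single manifest declaration keeps your app off devices without phone hardware (a minimal sketch; declaring the feature already implies required="true", it is just spelled out here for clarity):

<uses-feature android:name="android.hardware.telephony" android:required="true" />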

Next up are tablets. Besides the Samsung Galaxy Tab I mentioned, the Dell Streak is now on sale, which has a 5” screen and blurs the line between a phone and a tablet. Of course, Android has supported screens of any size since version 1.6, but these are the first large-screen devices to actually ship with Android Market. A tablet’s biggest quirk, of course, is its larger screen.

It’s pretty rare that we see problems with existing apps running on large-screen devices, but at the same time many apps would benefit from making better use of the additional screen space. For instance, an email app might be improved by changing its UI from a list-oriented layout to a two-pane view. Fortunately, Android and the SDK make it easy to support multiple screen sizes in your app, so you can read up on our documentation and make sure your app makes the best use of the extra space on large screens.

Speaking of screen quirks, we’re also seeing the first devices whose natural screen orientation is landscape. For instance, Motorola’s CHARM and FLIPOUT phones have screens which are wider than they are tall, when used in the natural orientation. The majority of apps won’t even notice the difference, but if your app uses sensors like accelerometer or compass, you might need to double-check your code.

Now, the devices I’ve mentioned so far still have the same hardware that Android phones have, like compass and accelerometer sensors, cameras, and so on. However, there are also devices coming that will omit some of this hardware. For instance, you’ve probably heard of Google TV, which will get Android Market in 2011. Since Google TV is, you know, a stationary object, it won’t have a compass and accelerometer. It also won’t have a standard camera, since we decided there wasn’t a big audience for pictures of the dust bunnies behind your TV.

Fortunately, you can use our built-in tools to handle these cases and control which devices your app appears to in Android Market. Android lets you provide versions of your UI optimized for various screen configurations, and each device will pick the one that runs best. Meanwhile, Android Market will make sure your apps only appear to devices that can run them, by matching the features you list as required (via <uses-feature> tags) only with devices that have those features.

Android started on phones, but we’re growing to fit new kinds of devices. Now your Android app can run on almost anything, and the potential size of your audience is growing fast. But to fully unlock this additional reach, you should double-check your app and tweak it if you need to, so that it puts its best foot forward. Watch this blog over the next few weeks, as we post a series of detailed “tips and tricks” articles on how to get the most out of the new gadgets.

It’s official folks: we’re living in the future! Happy coding.

Securing Android LVL Applications


[This post is by Trevor Johns, who's a Developer Programs Engineer working on Android. — Tim Bray]

The Android Market licensing service is a powerful tool for protecting your applications against unauthorized use. The License Verification Library (LVL) is a key component. A determined attacker who’s willing to disassemble and reassemble code can eventually hack around the service; but application developers can make the hackers’ task immensely more difficult, to the point where it may simply not be worth their time.

Out of the box, the LVL protects against casual piracy: users who try to copy APKs directly from one device to another without purchasing the application. Here are some techniques to make things hard, even for technically skilled attackers who attempt to decompile your application and remove or disable LVL-related code.

  • You can obfuscate your application to make it difficult to reverse-engineer.

  • You can modify the licensing library itself to make it difficult to apply common cracking techniques.

  • You can make your application tamper-resistant.

  • You can offload license validation to a trusted server.

This can and should be done differently by each app developer. A guiding principle in the design of the licensing service is that attackers must be forced to crack each application individually, and unfortunately no client-side code can be made 100% secure. As a result, we depend on developers introducing additional complexity and heterogeneity into the license check code — something which requires human ingenuity and a detailed knowledge of the application the license library is being integrated into.

Technique: Code Obfuscation

The first line of defense in your application should be code obfuscation. Code obfuscation will not protect against automated attacks, and it doesn’t alter the flow of your program. However, it does make it more difficult for attackers to write the initial attack for an application, by removing symbols that would quickly reveal the original structure of a compiled application. As such, we strongly recommend using code obfuscation in all LVL installations.

To understand what an obfuscator does, consider the build process for your application: Your application is compiled and converted into .dex files and packaged in an APK for distribution on devices. The bytecode contains references to the original code — packages, classes, methods, and fields all retain their original (human readable) names in the compiled code. Attackers use this information to help reverse-engineer your program, and ultimately disable the license check.

Obfuscators replace these names with short, machine generated alternatives. Rather than seeing a call to dontAllow(), an attacker would see a call to a(). This makes it more difficult to intuit the purpose of these functions without access to the original source code.

There are a number of commercial and open-source obfuscators available for Java that will work with Android. We have had good experience with ProGuard, but we encourage you to explore a range of obfuscators to find the solution that works best for you.

We will be publishing a separate article soon that provides detailed advice on working with ProGuard. Until then, please refer to the ProGuard documentation.

Technique: Modifying the license library

The second line of defense against attack from crackers is to modify the license verification library in such a way that it’s difficult for an attacker to modify the disassembled code and get a positive license check as result.

This actually provides protection against two different types of attack: it protects against attackers trying to crack your application, but it also prevents attacks designed to target other applications (or even the stock LVL distribution itself) from being easily ported over to your application. The goal should be to both increase the complexity of your application’s bytecode and make your application’s LVL implementation unique.

When modifying the license library, there are three areas that you will want to focus on:

  • The core licensing library logic.

  • The entry/exit points of the licensing library.

  • How your application invokes the licensing library and handles the license response.

In the case of the core licensing library, you’ll primarily want to focus on two classes which comprise the core of the LVL logic: LicenseChecker and LicenseValidator.

Quite simply, your goal is to modify these two classes as much as possible, in any way possible, while still retaining the original function of the application. Here are some ideas to get you started, but you’re encouraged to be creative:

  • Replace switch statements with if statements.

  • Use XOR or hash functions to derive new values for any constants used and check for those instead.

  • Remove unused code. For instance, if you’re sure you won’t need swappable policies, remove the Policy interface and implement the policy verification inline with the rest of LicenseValidator.

  • Move the entirety of the LVL into your own application’s package.

  • Spawn additional threads to handle different parts of license validation.

  • Replace functions with inline code where possible.

For example, consider the following function from LicenseValidator:

public void verify(PublicKey publicKey, int responseCode, String signedData, String signature) {
    // ... Response validation code omitted for brevity ...
    switch (responseCode) {
        // In Java bytecode, LICENSED will be converted to the constant 0x0
        case LICENSED:
        case LICENSED_OLD_KEY:
            LicenseResponse limiterResponse = mDeviceLimiter.isDeviceAllowed(userId);
            handleResponse(limiterResponse, data);
            break;
        // NOT_LICENSED will be converted to the constant 0x1
        case NOT_LICENSED:
            handleResponse(LicenseResponse.NOT_LICENSED, data);
            break;
        // ... Extra response codes also removed for brevity ...
    }
}

In this example, an attacker might try to swap the code belonging to the LICENSED and NOT_LICENSED cases, so that an unlicensed user will be treated as licensed. The integer values for LICENSED (0x0) and NOT_LICENSED (0x1) will be known to an attacker by studying the LVL source, so even obfuscation makes it very easy to locate where this check is performed in your application’s bytecode.

To make this more difficult, consider the following modification:

public void verify(PublicKey publicKey, int responseCode, String signedData, String signature) {
    // ... Response validation code omitted for brevity ...

    // Compute a derivative version of the response code.
    // Ideally, this should be placed as far from the responseCode switch as possible,
    // to prevent attackers from noticing the call to the CRC32 library, which would be
    // a strong hint as to what we're doing here. If you can add additional transformations
    // elsewhere before this value is used, that's even better.
    java.util.zip.CRC32 crc32 = new java.util.zip.CRC32();
    crc32.update(responseCode);
    long transformedResponseCode = crc32.getValue();

    // ... put unrelated application code here ...

    // crc32(LICENSED) == 3523407757
    if (transformedResponseCode == 3523407757L) {
        LicenseResponse limiterResponse = mDeviceLimiter.isDeviceAllowed(userId);
        handleResponse(limiterResponse, data);
    }

    // ... put unrelated application code here ...

    // crc32(LICENSED_OLD_KEY) == 1007455905
    if (transformedResponseCode == 1007455905L) {
        LicenseResponse limiterResponse = mDeviceLimiter.isDeviceAllowed(userId);
        handleResponse(limiterResponse, data);
    }

    // ... put unrelated application code here ...

    // crc32(NOT_LICENSED) == 2768625435
    if (transformedResponseCode == 2768625435L) {
        handleResponse(LicenseResponse.NOT_LICENSED, data);
    }
}

In this example, we’ve added additional code to transform the license response code into a different value. We’ve also removed the switch block, allowing us to inject unrelated application code between the three license response checks. (Remember: The goal is to make your application’s LVL implementation unique. Do not copy the code above verbatim — come up with your own approach.)

For the entry/exit points, be aware that attackers may try to write a counterfeit version of the LVL that implements the same public interface, then try to swap out the relevant classes in your application. To prevent this, consider adding additional arguments to the LicenseChecker constructor, as well as allow() and dontAllow() in the LicenseCheckerCallback. For example, you could pass in a nonce (a unique value) to LicenseChecker that must also be present when calling allow().

Note: Renaming allow() and dontAllow() won’t make a difference, assuming that you’re using an obfuscator. The obfuscator will automatically rename these functions for you.

Be aware that attackers might try to attack the calls in your application to the LVL. For example, if you display a dialog on license failure with an “Exit” button, consider what would happen if an attacker were to comment out the line of code that displayed that window. If the user never pushes the “Exit” button in the dialog (which is now not being displayed), will your application still terminate? To prevent this, consider invoking a different Activity to handle informing a user that their license is invalid, and immediately terminating the original Activity; add additional finish() statements to other parts of your code that will get executed in case the original one gets disabled; or set a timer that will cause your application to be terminated after a timeout. It’s also a good idea to defer the license check until your application has been running a few minutes, since attackers will be expecting the license check to occur during your application’s launch.

Finally, be aware that certain methods cannot be obfuscated, even when using a tool such as ProGuard. As a key example, onCreate() cannot be renamed, since it needs to remain callable by the Android system. Avoid putting license check code in these methods, since attackers will be looking for the LVL there.

Technique: Make your application tamper-resistant

In order for an attacker to remove the LVL from your code, they have to modify your code. Unless done precisely, this can be detected by your code. There are a few approaches you can use here.

The most obvious mechanism is to use a lightweight hash function, such as CRC32, and build a hash of your application’s code. You can then compare this checksum with a known good value. You can find the path of your application’s files by calling Context.getApplicationInfo() — just be sure not to compute a checksum of the file that contains your checksum! (Consider storing this information on a third-party server.)
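Here is a rough sketch of that idea (not from the original post). It checksums the installed APK itself; the known-good value should live somewhere the attacker cannot edit, such as your own server:

import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.CRC32;

import android.content.Context;

static long apkChecksum(Context context) throws IOException {
    // Path of the installed APK on the device.
    String apkPath = context.getApplicationInfo().sourceDir;
    CRC32 crc = new CRC32();
    InputStream in = new FileInputStream(apkPath);
    try {
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) != -1) {
            crc.update(buf, 0, n);
        }
    } finally {
        in.close();
    }
    return crc.getValue();
}

Compare the result against the value you recorded at build time, and treat a mismatch the same way you would treat a failed license check.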

[In a late edit, we removed a suggestion that you use a check that relies on getInstallerPackageName, when one of our senior engineers pointed out that this is undocumented, unsupported, and only happens to work by accident. –Tim]

Also, you can check to see if your application is debuggable. If your application refuses to behave normally when the debug flag is set, it may be harder for an attacker to compromise:

boolean isDebuggable = (0 != (getApplicationInfo().flags & ApplicationInfo.FLAG_DEBUGGABLE));

Technique: Offload license validation to a trusted server

If your application has an online component, a very powerful technique to prevent piracy is to send a copy of the license server response, contained inside the ResponseData class, along with its signature, to your online server. Your server can then verify that the user is licensed, and if not refuse to serve any online content.

Since the license response is cryptographically signed, your server can check to make sure that the license response hasn’t been tampered with by using the public RSA key stored in the Android Market publisher console.

When performing the server-side validation, you will want to check all of the following:

  • That the response signature is valid.

  • That the license service returned a LICENSED response.

  • That the package name and version code match the correct application.

  • That the license response has not expired (check the VT license response extra).

  • You should also log the userId field to ensure that a cracked application isn’t replaying a license response from another licensed user. (This would be visible by an abnormally high number of license checks coming from a single userId.)

To see how to properly verify a license response, look at LicenseValidator.verify().
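On the server side, the signature check itself is plain RSA verification; here is a rough Java sketch (the method name is illustrative, and DatatypeConverter stands in for whatever Base64 decoder your server stack provides). The LVL signs responses with SHA1withRSA, and the Base64-encoded public key is the one shown in the publisher console:

import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.Signature;
import java.security.spec.X509EncodedKeySpec;
import javax.xml.bind.DatatypeConverter;

// signedData and base64Signature are forwarded, untouched, by your app.
static boolean isResponseSignatureValid(String base64PublicKey,
        String signedData, String base64Signature) throws Exception {
    byte[] keyBytes = DatatypeConverter.parseBase64Binary(base64PublicKey);
    PublicKey key = KeyFactory.getInstance("RSA")
            .generatePublic(new X509EncodedKeySpec(keyBytes));
    Signature sig = Signature.getInstance("SHA1withRSA");
    sig.initVerify(key);
    sig.update(signedData.getBytes("UTF-8"));
    return sig.verify(DatatypeConverter.parseBase64Binary(base64Signature));
}

A valid signature is only the first gate; the package name, version code, expiry and userId checks from the list above still apply.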

As long as the license check is entirely handled within server code (and your server itself is secure), it’s worth noting that even an expert cracker cannot circumvent this mechanism. This is because your server is a trusted computing environment.

Remember that any code running on a computer under the user’s control (including their Android device) is untrusted. If you choose to inform the user that the server-side license validation has failed, this must only be done in an advisory capacity. You must still make sure that your server refuses to serve any content to an unlicensed user.

Conclusion

In summary, remember that your goal as an application developer is to make your application’s LVL implementation unique, difficult to trace when decompiled, and resistant to any changes that might be introduced. Realize that this might involve modifying your code in ways that seem counter-intuitive from a traditional software engineering viewpoint, such as removing functions and hiding license check routines inside unrelated code.

For added protection, consider moving the license check to a trusted server, where attackers will be unable to modify the license check code. While it’s impossible to write 100% secure validation code on client devices, this is attainable on a machine under your control.

And above all else, be creative. You have the advantage in that you have access to a fully annotated copy of your source code — attackers will be working with uncommented bytecode. Use this to your advantage.

Remember that, assuming you’ve followed the guidelines here, attackers will need to crack each new version of your application. Add new features and release often, and consider modifying your LVL implementation with each release to create additional work for attackers.

And above all else, listen to your users and keep them happy. The best defense against piracy isn’t technical, it’s emotional.
