Jan 26 14

WinJS helper for converting from WinJS.xhr to HttpClient

by Brandon

This post isn’t strictly part of the “Building Great Apps” series, but it will be referenced there, and since someone was asking about this helper on Twitter I decided to go ahead and post it here.

In Windows 8.1, a new WinRT API was introduced for making HTTP requests. The main runtime class used is Windows.Web.Http.HttpClient.

There are several ways in which this is superior to the standard XmlHttpRequest interface. Unfortunately, for those using WinJS.xhr() today, converting your code to use HttpClient is non-trivial.

In my case, I had to convert all of my XHR calls to use HttpClient to work around a bug in XHR: a deadlock which occurs in IE/Windows’ XHR implementation when it’s used simultaneously from multiple threads (e.g. the UI thread and one or more WebWorkers). I reported the issue to Microsoft and they identified the source of the bug, but rather than wait for them to issue a fix, I decided to migrate all my code to HttpClient, which doesn’t have this problem.

In some cases, like my Twitter Stream API consumer, I rewrote the whole thing using HttpClient, because it made more sense, and the new implementation is superior in several ways. The original implementation did use WinJS.xhr, but it was a heavily… customized usage.

However, the rest of my code used WinJS.xhr in pretty standard ways. Rather than rewrite any of it, I decided to just create a drop-in replacement for WinJS.xhr which wraps HttpClient. Better yet, it detects when WinRT is not available (e.g. when running on Windows Phone 8, iOS, etc.) and falls back to the regular WinJS.xhr implementation.
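The detection part of that pattern can be sketched like this (a minimal illustration, not the Gist’s actual internals; the function and parameter names are made up):

```javascript
// Hypothetical sketch of the fallback pattern: prefer the HttpClient-backed
// implementation when the WinRT namespace is present, otherwise fall back to
// the stock WinJS.xhr implementation.
function chooseXhrImplementation(httpClientXhr, winjsXhr) {
    var hasWinRT = typeof Windows !== "undefined" &&
        typeof Windows.Web !== "undefined" &&
        typeof Windows.Web.Http !== "undefined";
    return hasWinRT ? httpClientXhr : winjsXhr;
}
```

On Windows 8.x the first branch wins; on Windows Phone 8 or iOS the `Windows` namespace doesn’t exist, so the stock implementation is returned.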

I posted the helper up here as a GitHub Gist:

https://gist.github.com/BrandonLive/8641828

To use it, include that file in your project, and then replace calls to WinJS.xhr with BrandonJS.xhr.

One caveat is that I couldn’t find a way to inspect a JavaScript multipart FormData object in order to build the equivalent HttpMultipartFormDataContent representation. You should still be able to use the helper; you’ll just need to create an HttpMultipartFormDataContent instead of a FormData object and pass it as the request data.
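For illustration, building that content from a plain map of fields might look like the sketch below. The WinRT branch uses the real HttpMultipartFormDataContent and HttpStringContent classes; the non-WinRT fallback just returns the map unchanged so the shape of the data stays visible anywhere:

```javascript
// Sketch: build named form parts for a request body. On WinRT this creates an
// HttpMultipartFormDataContent; elsewhere it returns the plain map unchanged
// (for illustration only).
function buildFormParts(fields) {
    if (typeof Windows !== "undefined" && Windows.Web && Windows.Web.Http) {
        var form = new Windows.Web.Http.HttpMultipartFormDataContent();
        Object.keys(fields).forEach(function (name) {
            form.add(
                new Windows.Web.Http.HttpStringContent(String(fields[name])),
                name);
        });
        return form;
    }
    return fields;
}
```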

If you find any problems or want to submit any improvements, please do so on the Gist page!

Jan 23 14

My MVVM rant

by Brandon

As I said in the last post, XAML fans tend to favor the MVVM pattern. I’ve had only limited experience with this, but so far I’m personally not a fan. Let me be blunt. The MVVM projects I’ve seen tend to be over-engineered feats of data-binding hell. If you’ve ever thought, “those bits of ‘glue’ code I have to write are kind of annoying, so why don’t I spend a bunch more time making even more of it,” then this may be the option for you.

Okay, maybe that’s not fair. I’m sure some people make great use of MVVM. Others use the MVC pattern with XAML just fine. Both patterns have the same basic goals of separating your UI stuff from your other stuff, and theoretically making it easier for non-developers to do UI work, or to use fancy tools like Blend to do their UI work, or to build different UIs for different target environments (i.e. adapting to different screen sizes, or making your app feel “at home” on systems with different design conventions).

Personally, I don’t buy it. I get the vision, but I’ve never seen it work. I’ve seen projects crumble under the weight of unnecessary code supporting these patterns, and I’ve seen a lot of cases where development teams start out with a grand MVC vision and end up with a puzzling mash-up of excess components, plumbing gymnastics, and endless cases where someone was in a hurry and just bypassed the whole damn abstraction.

And best of all, I’ve never seen the proposed benefits actually bear fruit. Designers don’t write XAML, and most of these projects end up with exactly one view implementation. Am I wrong? Tell me in the comment section 🙂

Maybe the problem isn’t so much the pattern itself, but the scope at which it’s applied. The projects I’ve seen struggle tend to try and carve these horizontal delineations across an entire codebase. As you’ll see in later Building Great #WinApps posts, I don’t take this approach. While I do separate the UI from the data, I tend to think of the result more as a set of vertically integrated controls, with the necessary interfaces to be composed with each other as the project requires. Maybe it’s just a pedantic philosophical difference. Or maybe you MVVM fans just won’t like the way I build things. Keep following the series if you want to find out!

/rant

Jan 22 14

Building Great #WinApps: Platforms aplenty

by Brandon

One of the greatest things about developing for Windows can also be a challenge. On competitive platforms you can find options. On Windows, you’re given options. And, well, many of them. I’m going to start with a quick rundown of how I see the competitive environments, then dive into your options on Windows 8/RT. Feel free to skip ahead if that’s all you’re interested in finding.

iOS

On iOS the only first-class platform you have available is Objective-C with Cocoa Touch. To a C++ (+JS / C# / etc.) developer like me, examples of this code might as well be written in Cyrillic. That’s just at first glance, though; I expect it’s not that hard to pick up, weird as it may look.

Of course, you do have other options. They’re just not ones Apple cares about. Or in some cases, options they actively try to punish you for choosing. The two that come to mind for me are HTML/JS apps where you package up your code and mark-up into a native package and host it in a WebView (maybe using something like PhoneGap), or Xamarin. Okay, maybe Adobe has some thing too. If you care about that, you know about it.

HTML apps can, of course, call out to their Objective-C hosts, which lets you write some platform-specific, optimized native code when it is called for. It’s just clumsy.

Android

Android developers are roughly in the same boat, in that you get one real first-party option: Java, with a proprietary UI framework. For someone like me, this gives the advantage of having a familiar looking and working language (I tend to think of it as “lesser C#”). They do offer the NDK which I understand lets you write native C/C++ code if you want to avoid GC overhead and other Java pitfalls, though I think you’re still required to use Java to interact with the UI framework. Also, its use seems to be discouraged for most situations.

Otherwise, the same alternatives as iOS apply (HTML/JS, Xamarin, maybe some Adobe thing if you’re that sort of person). I actually don’t know if they hinder JS execution speed in the way that iOS does, but they certainly don’t do anything to make HTML/JS apps easy or efficient. In fact, my understanding is that it’s actually substantially harder to do this on Android than on iOS, primarily because of their fragmentation problem. CSS capabilities vary hugely from one version of Android to the next. And if you want to target the majority of Android users, you need to be able to run on a pretty sad and ancient set of web standards and capabilities. Ironic from the company that makes Chrome OS, but that’s reality.

Windows Phone 8

Windows Phone 8 is actually quite a lot like Android in this regard. By default you get first-party support for a managed language (C#) with a proprietary UI framework and some ability to write native C++ code if you wish.

Once again, you can write HTML/JS apps (and of course Xamarin could be described as either “at home” or maybe just “not applicable” here). HTML/JS apps here are a lot like those on Android, with a few advantages:

1) Every user has the same modern IE10 rendering engine with great support for CSS3 and some things that even desktop Chrome still doesn’t have, like CSS Grid (which is a truly wonderful thing).

2) JS runs efficiently and I believe animations can run independent of the UI thread.

3) Since the engine is the same as desktop IE10, you can build, test, and debug against that, which means you don’t have to deal with the weight of the phone emulator for much of your development process.

In other ways it’s more rough. There’s no real way to debug or DOM-inspect on the phone or even in the emulator. I think iOS and Android may have some advantages in that particular regard.

Mobile web app blues

On each of the above platforms, web apps face various problems. On at least iOS, your JavaScript runs at about 1/10th speed, mostly because Apple hates you. Err, wants you to use Objective C and Cocoa Touch.

Some common problems are:

A) You need to write a host process. On Android and WinPhone this is in Java or C#, which means you do have two garbage collectors and two UI frameworks to load up. Though fortunately the CLR/Java garbage collector shouldn’t have much of anything to actually do.

B) The WebView controls on these platforms (including WP8) tend to suck at certain things like handling subscrollers. They’re less common on the web, but immensely useful for app development.

C) None of them give you direct access to the native platform, outside of what’s in their implementation of HTML5 / CSS3 standards.

Windows 8

Windows 8 (/8.1/RT) is different. Here you actually get first-party options. In fact, not only are there options for programming language and UI framework, but you can mix and match many of them.

UI Frameworks

                     Direct3D   XAML                           HTML
UI languages         C++        C++, C#                        JavaScript
Backend languages    C++, C#    C++, C#, (JS, theoretically)   JavaScript, C#, C++

That’s a lot of possible combinations, and the paradox of choice very much comes into play. What’s great is that a lot of developers can use tools, languages, and UI frameworks they’re familiar with. What’s not so great is that the number of choices, and understanding how each of them work, can be overwhelming.

What is WinRT?

WinRT, or the Windows Runtime, describes a set of APIs exposed by Windows to any of the languages described above. The beauty of this is that when a team at Microsoft decides to offer functionality to Windows developers, they define and implement it once, and the WinRT infrastructure “magically” makes it available in a natural form to C++, C#/.NET, and JavaScript code running in a Windows 8 app.

I’ll dive more into what the Windows Runtime is and how it works in a later post, if there’s interest. Though I believe there are already articles around the web which describe it well enough.

Suffice it to say that WinRT APIs are not designed to be portable and are not part of any kind of standard. These are equivalent to the native iOS and Android APIs, which let you do things like access file storage, talk to custom hardware devices, or interact with platform-specific features like the Share charm, live tiles, and so on.

So then what is WinJS?

Despite some misconceptions I’ve heard, WinJS is not a proprietary version of JavaScript, or anything at all like that. Its full name is the Windows Library for JavaScript, and it’s just a JS and CSS library. Literally, it’s made up of base.js, ui.js, ui-light.css, and ui-dark.css. Those files contain bog standard JavaScript and CSS, and you can peruse them at your choosing (and debug into them, or override them via duck punching, when needed). It’s a lot like jQuery, and offers some of the same functionality, as well as several UI controls (again, written entirely in JS) which fit the platform’s look and feel.
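Because it’s all plain JavaScript, overriding a piece of it via duck punching is just ordinary function wrapping. A standalone sketch (using a made-up library object, not actual WinJS code):

```javascript
// "Duck punching": replace a library function at runtime with a wrapper that
// calls through to the original and then adjusts the result.
var lib = {
    format: function (s) { return s.trim(); }
};

var originalFormat = lib.format;
lib.format = function (s) {
    // Call the original implementation, then layer on new behavior.
    return originalFormat(s).toUpperCase();
};

console.log(lib.format("  hello  ")); // "HELLO"
```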

Technically you don’t need to use any of it in your JS app, but there are a few bits you’ll almost certainly want to use for app start-up and dealing with WinRT APIs (i.e. WinJS’s Promise implementation, which works seamlessly with async WinRT calls). The rest is up to you, though I find several pieces to be quite useful.
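The chaining pattern looks like this. WinJS.Promise and a standard Promise share the then() shape, so a plain Promise stands in here to keep the sketch self-contained; in an app the source would be a WinRT async call such as FileIO.readTextAsync:

```javascript
// Stand-in for an async WinRT operation (e.g. reading a file as text).
function loadTextAsync(text) {
    return Promise.resolve(text);
}

loadTextAsync('{"count": 3}')
    .then(function (raw) { return JSON.parse(raw); })
    .then(function (data) { console.log(data.count); }); // logs 3
```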

I’ll dig into WinJS and the parts I use (and why) in a later post. But hopefully that summary is helpful here.

Got it, so which option should I choose?

That table above suggests a sizable matrix to evaluate. I wrestled with how best to present my own boiled-down version of this (I’ve in fact just rewritten this whole section three times). I looked at presenting it in an “if you’re an X, consider Y and maybe Z” format. But that’s challenging, particularly as someone who came into mobile app development with differing degrees of familiarity with a variety of platforms.

Instead I’m actually going to focus on the merits of each option I present, and call out how your background might factor in as appropriate. First up is…

C++ and Direct3D

This is the “closer to the metal” option and most appropriate for high-performance 3D games. It’s also a perfectly valid option for 2D games, especially if you’re already familiar with C++ and/or D3D, or even OpenGL. This is one of the easier options to rule in or out. It gives you the best performance potential, but is likely the most work, the most specialized, and least portable to non-MS platforms (relative to the other options here).

C++ and XAML

First off, if you’re a C/C++ developer and you’ve hated how, for ages now, Microsoft has kept its best UI frameworks to itself (or reserved them for those crazy .NET folks), rejoice! Beginning with Windows 8, Microsoft has realized that some people like C++ but also enjoy living in this century with things like an object-oriented control and layout model and markup-based UI.

If you choose this option, your code will run faster than if you wrote it in C# or JavaScript. It just will. You get a modern UI framework with a solid set of controls and good performance. Itself fully native C++ code, the XAML stack renders using Direct3D, supports independent (off-thread) animations, and integrates with Windows 8’s truly great DirectManipulation independent input system. In Win8.1 I think it may even use some DirectComposition and independent hit testing magic that JS apps have had since 8.0. Maybe.

XAML fans tend to favor the MVVM pattern. Others use MVC with it, with varying degrees of success. Honestly, I’m not a huge fan of either. I know that’s blasphemy to a lot of people. I wrote a little rant here on the subject, but I’ll spare you and publish it as a separate post later. Maybe someone will read it and explain the error of my ways 😉

C# and XAML

Here you find basically the same thing as above, with these differences:

  1. It’s going to be slower. The largest impacts are:
    • Start-up. 
      Loading the CLR and having it JIT compile your code adds start-up time you don’t have as a C++ app.
    • Garbage collection.
      There’s just no getting around the fact that GC overall adds overhead and limits your ability to optimize.
      Depending on what you’re doing, the performance difference from C++ may or may not be noticeable.
  2. It’s going to be easier. Particularly if you don’t already know C++, and very much so if you already know C# or Java.
  3. It is for now the easiest way to share code with Windows Phone.
  4. It’s got killer tools, both in VS and extensions.

If you’re already a C# + XAML developer, this should be compelling. You get to use the language you (probably) love, the markup language you know, and a good chunk of the .NET API set you’re used to.

Yeah, there are some differences if you’re coming from WPF or Silverlight (or the phone version of Silverlight). It should still be an easy enough transition, and it’s cool that it’s a native code implementation and available to C++ developers (and thus now starting to be adopted inside Windows itself). Complain all you want about the N different versions they’ve confused you with over the years, but know that this is absolutely the version of XAML that Microsoft is taking forward.

JavaScript and HTML (and maybe some C++!)

If you know JavaScript, HTML, and CSS, this gets familiarity points. Contrary to some misconceptions, JavaScript apps on Windows are, in fact, real JavaScript. The major differences from writing a web app hosted in a WebView are:

  1. Windows provides the host executable, wwahost.exe. You don’t need to provide one. Your app is literally a zip file of JS, HTML, and CSS files, along with a manifest and any resources you choose to include (including, optionally, any DLLs of C# or C++ code you want to invoke).
  2. They’re faster, because the OS does smart things like caching your JS as bytecode at install time, and letting it participate in optimization services like prefetch and superfetch.
  3. You get direct access to the full set of Windows Runtime APIs.
  4. You can use the WinJS library of JS and CSS without having to directly include the files in your app (all apps share a common installation of it on the end user’s machine).

What if you’ve never used JS before? First off, forget anything you’ve heard about JavaScript. Since we’re talking Windows apps, you don’t have to worry about different web browsers, or ancient JavaScript limitations, or the perils of IE6’s CSS implementation. Yes, you’ll run into bugs your C++ or C# compiler would have caught, and refactoring can sometimes be a challenge. But you’ll also gain several benefits:

  1. A really productive, dynamic language which works really well with the web.
  2. A fast, lightweight, flexible UI framework which makes so many things fast and easy, as I’ll describe later.
  3. More portability. I’ll cover this more in a later post, but if you forego a few niceties not available on other platforms (e.g. my beloved CSS Grid), much of your app will be really easy to port to other platforms or the web. Yes, much more portable than Xamarin.
  4. Tons of documentation (hell, I reference MDN daily) and example code out on the web.
  5. Great tools. Both in VS (and Blend I hear, if you’re into that sort of thing), and from third parties. Seriously, you’ve got to love web-based tools like this awesome one for analyzing performance characteristics of different implementations.
  6. Lots of portable libraries like jQuery, KnockoutJS, AngularJS, and so on. Drop them into your app and go.
  7. You can use PhoneGap to make your app even more portable, as it wraps various WinRT calls in a common wrapper that also works on WP8, iOS, and Android.
  8. If you don’t know it already, you’ll learn a valuable new skill for your repertoire, and one that’s not tied to any particular vendor’s whims.

You also get a few advantages from going with HTML over XAML:

  1. Start-up performance. Surprised? HTML/JS apps can actually start faster than equivalent C# + XAML apps. Well, on Windows anyway. Weird, right?
  2. UI performance. XAML got better in 8.1, but there are still several cases where the HTML engine is just better. Better at animations (within the realm of accelerated CSS3 animations and transitions), better at scrolling/panning, and better at other touch manipulations like swipe and drag+drop. In 8.0 the ListView control was faster (though maybe XAML caught up in 8.1).
  3. It’s super easy to consume HTML from web services, including your own. And because there are so many great tools for building and manipulating HTML outside of a browser, you can really easily write a server that passes down markup your app will render natively, without having to load a separate WebView.
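A hedged sketch of that last point: fetch markup from your own service and inject it into a host element. In an app you might use WinJS.Utilities.setInnerHTMLUnsafe for markup you trust (app content is normally filtered on injection); the plain-DOM branch here is an illustrative fallback so the sketch works anywhere:

```javascript
// Render server-delivered markup into a host element. Only use the "unsafe"
// path for markup you control, e.g. from your own server.
function renderMarkup(hostElement, markup) {
    if (typeof WinJS !== "undefined" && WinJS.Utilities) {
        WinJS.Utilities.setInnerHTMLUnsafe(hostElement, markup);
    } else {
        hostElement.innerHTML = markup; // plain-DOM fallback for illustration
    }
    return hostElement;
}
```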

There are other things, too. I like the WinJS async model better. You may not. I like the templating model. I like that everything including my UI is generally portable. I prefer CSS to the way XAML handles styling, and find it easier to write responsive UI (in the modern web sense of “responsive,” i.e. adapting to variable screen sizes). I like that it’s being pushed forward by titans other than just Microsoft. Oh, and it has some built-in controls that XAML lacks.

Regardless of your background, if you’ve got a few days to experiment, I highly recommend taking up a project like writing a simple app in JavaScript and HTML. In fact, in a later post I’ll walk through one from the perspective of someone with little or no JS/HTML/CSS knowledge. You might just find it fun, and be surprised by the result.

Plus, if you’re writing anything performance intensive (for example, some custom image processing routines), or some IP you want to keep obscured from prying eyes, you can do that part in C++. Windows makes it incredibly easy to write a bunch of custom C++ code that crunches data and then return that to your JavaScript frontend. It’s kind of brilliant, and I highly recommend checking it out. I’m sure I’ll give an example of that at some point too.

There are downsides. Unfortunately, even in 8.1, WebWorkers don’t support transferable objects, so anything you pass between threads gets copied. You also can’t share a WebWorker among multiple windows (without routing through one of said windows). With C++ or C# you get more control over threading and memory. The advantage in JavaScript is you don’t have to worry about locking, as the platform enforces serialized message passing as the only cross-thread communication mechanism.
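The copy cost is easy to see with structuredClone, which implements the same algorithm postMessage uses (available in modern runtimes; in 8.x-era apps the copy simply happened inside postMessage itself):

```javascript
// postMessage without transferables duplicates the payload via the structured
// clone algorithm; mutating the copy leaves the original untouched.
const original = new Uint8Array([1, 2, 3]);
const copy = structuredClone(original);
copy[0] = 99;
console.log(original[0], copy[0]); // 1 99
```

With a transferable, the buffer would instead be moved and the sender’s copy detached, avoiding the duplication entirely.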

The real answer

Most apps (which aren’t 3D games) are going to use either C#+XAML or HTML+JS. Which one you choose will probably depend a lot on your background and personal preferences. Both platforms can be used to create great, fast, beautiful, reliable, maintainable apps. Both can let you be extremely productive as a developer. Both look to have bright futures.

Microsoft used JS apps for most of the in-box Windows apps: the Mail client in Windows 8.1, the Xbox apps, and the Bing apps (except Maps). The Xbox One uses it for, as far as I can tell, everything. In other words, it’s not going anywhere, even if some folks suggest otherwise.

My answer

In my case, I was primarily a C++ developer who was an expert in Microsoft’s internal UI framework, which I’d used for years. I had built WinForms C# apps many years ago, and dabbled in WPF and Silverlight here and there (even making a rough Twitter app in the latter at one point!). The most extensive experience I had with WPF was when I was at Microsoft and was loaned out to help a struggling team with their project. I found that WPF/XAML had a few neat tricks, but I didn’t exactly fall in love with it.

I’d also had a bit of experience with web development, but it was pretty limited. I’d customized some WordPress themes and written some little JS slideshow / flipper sort of things, but nothing sophisticated, and I really had only known just enough about the language to get the job done.

When I decided to build my first Windows 8 app, 4th at Square, I chose JS because I wanted my app to be fast, because I wanted the project to be a fun learning experience, and because I wanted to see what all the fuss was about. It was my first app (and I haven’t touched it in a while now), so I won’t point to it as a pinnacle of design. Or, well, anything. I do use it nearly every day though. And it met all of my goals. It was fast, even on a first-gen Surface RT. It was fun and educational. And I found myself liking a lot about that way of building an app.

Jan 22 14

Building Great #WinApps: Introduction

by Brandon

Recently I’ve had a few requests to share my experiences building Tweetium, Newseen, Cattergories, and 4th at Square. I’ve decided to jump back into the world of blogging with a series I’m going to not-so-humbly title “Building great #WinApps.” This post serves as an introduction to the series.

What will I cover?

A bit of everything. Here’s a sampling of topics I have planned:

  • Overview of the programming language and UI platform options
    • What factors should you consider?
    • Why did I choose WinJS?
  • The Win8 JS app model (or, “What the heck *is* WinJS anyway?”)
  • Building a multi-threaded JS app
  • Nailing start-up performance
  • Architecture overview for Newseen and/or Tweetium
  • Designing the UX for Newseen and Tweetium
  • Building Tweetium’s flexbox-based grid view
  • Running a WinJS app on the web or other platforms
  • Workarounds for platform bugs and idiosyncrasies I’ve encountered
  • Debugging tricks and tips

Those are just some of the topics I’ve been thinking of writing about. There will certainly be others, and some of those may get merged or reframed a bit along the way.

Will there be code?

You betcha! I plan to post examples as well as some utilities I’ve created along the way. I expect this to be a beneficial exercise for me as I absolutely have things I want to refactor, and nothing motivates that like having other eyes on your code 🙂

Is this only applicable to Windows apps?

Nope! Or at least, some of the content will apply anywhere. Some posts will apply to any JavaScript app, others I expect will apply to any app at all. But for now the focus will be on the apps I’ve developed for Windows, and that’s the perspective from which I’ll be writing for the time being. Hence the #WinApps tag.

What about other aspects like design?

I’ve been doing the end-to-end design, development, and marketing of these apps, and I plan to share some of my experiences in each of these areas. That said, I’m not trained as a designer (or marketer, PR person, etc) and mostly picked up bits and pieces from working alongside some greats during my time at Microsoft. And of course learning by doing and iterating over the last several months.

I am proud to say my recent app designs have received a lot of praise, so I have some hope that my insights here will be helpful to some. At the very least I can hope to spark conversations with some actual designers!

How often will you post?

I’m going to try and share something every few days over the next several weeks. After I hit publish on this introduction I’m going to jump right into my first topic, which seems to be a popular one to discuss this afternoon 🙂

How do I follow along?

If you subscribe to this blog, you’re set! I’ll also tweet links to my posts so you can follow @BrandonLive. I’ll use the hashtag #WinApps which we used for a tweet-up a couple months ago (and which I hope we’ll be using for future events soon), so you can also watch for that.

Please be sure to comment! Questions, feedback, corrections, other perspectives… all are welcome and encouraged!

Now I’m off to write the first real post…

Dec 16 13

Tweetium – A new Twitter client for Windows 8

by Brandon

On Friday the latest B-side Software creation hit the Windows Store. It’s a new Twitter app which I initially started to build because I was frustrated with the official client and found the third-party alternatives severely lacking for my needs.

To learn more about it, visit the Tweetium website, or go straight to Tweetium in the Windows Store.

Richard Hay of Windows Observer did a little write-up about the release and posted this handy video walkthrough of v1.0:

The first app update is already awaiting store approval (list of fixes and new features here).

If you’re a Twitter user on Windows 8.1 (or Windows RT), check it out and let me know what you think!

Nov 2 13

Why I’m voting against I-522

by Brandon

If you don’t live in Washington state, you might be unaware of this year’s controversial ballot initiative, I-522. In short, it’s a law which proposes that some foods deemed as “genetically engineered” or containing “genetically modified organisms” must be labeled as such in grocery stores.

I first became aware of this initiative two or three months ago via a sign posted at my local PCC where I buy most of my groceries. It was endorsing the initiative. My first thought was, “Oh, I haven’t heard about a battle over genetically engineered/modified foods, but if there is one, labeling seems like it could be a reasonable compromise.” A moment later though, I thought, “What do they mean by genetically engineered? I hope the actual initiative is very specific about what’s labeled and why! It would be silly if every edible banana needed a label just because edible bananas don’t exist in nature.”

That was about the end of my thinking on the subject for quite a while. Then a month or so ago I began hearing more discussion about it, and seeing a lot of propaganda – all in support of the initiative. And all sponsored by PCC or Whole Foods. Finally, I saw an emotionally charged tweet with the hashtag #LabelGMOs and I replied to inquire why the tweeter considered this to be an important issue.

I decided to do some research to learn about the nature of the initiative and the facts and arguments applicable to the discussion. At first blush, I was unable to find anything to support the initiative. Well, nothing other than myths and emotional diatribes from non-experts with no sources to back up any of some fairly outlandish claims. On the opposing side I immediately found well-stated, sourced, logical objections from well-respected groups such as the American Association for the Advancement of Science who say, “Legally Mandating GM Food Labels Could ‘Mislead and Falsely Alarm Consumers’,” and the American Medical Association (PDF).

The AMA report’s conclusion is:

Despite strong consumer interest in mandatory labeling of bioengineered foods, the FDA’s science-based labeling policies do not support special labeling without evidence of material differences between bioengineered foods and their traditional counterparts. The Council supports this science-based approach, and believes that thorough pre-market safety assessment and the FDA’s requirement that any material difference between bioengineered foods and their traditional counterparts be disclosed in labeling, are effective in ensuring the safety of bioengineered food. To better characterize the potential harms of bioengineered foods, the Council believes that pre-market safety assessment should shift from a voluntary notification process to a mandatory requirement. The Council notes that consumers wishing to choose foods without bioengineered ingredients may do so by purchasing those that are labeled “USDA Organic.”

Someone supporting the initiative then sent me a link to the World Health Organization’s FAQ on the issue, apparently without reading it. It echoes the sentiments of the AAAS and AMA. As do the European equivalents of these organizations, such as the OECD and EFSA. So what gives?

My next step was to look at the text of the initiative itself. Oddly enough, it seems many of the big sites promoting I-522 have links to read the text which are broken. Fortunately you can get the full thing from the WA Secretary of State site in PDF form here. As I read the text I was overwhelmed with incredulity, and here’s why:

1) It provides no context for its definition of “genetic engineering”

522 immediately begins throwing around the phrase “genetic engineering” with absolutely no effort to establish the scope of its meaning. As any biologist will tell you, genetic engineering is not new and can be accomplished through a variety of means going back thousands of years. The earliest banana cultivars are believed to have been created between 5000 and 8000 BCE. The bananas and many other plants you and I eat today are hybrids which are initially sterile and then made fertile using polyploidization. Neat stuff, but not newsworthy.

Of course, most anti-GMO advocates will immediately cry that GE techniques like hybridization and artificial selection are not covered by their definition of “genetically engineered” or “genetic modification.” But the fact that the initiative makes countless claims using this term before making any effort to establish what it applies to is incredibly disconcerting.

Later, 522 establishes a definition of “genetically engineered” that is more specific. However, it gives no context regarding other forms of genetic engineering that exist, nor does it provide any basis for choosing the particular scope it has selected.

2) It cites no sources.

The initiative says:

Polls consistently show that the vast majority of the public, typically more than ninety percent, wants to know if their food was produced using genetic engineering.

Which polls? What question was actually asked and which options were given as answers? Why is this an argument for labeling? Everyone knows you can get ridiculous poll results by asking biased and uneducated groups questions which they’re ill-equipped to answer. Keep in mind that many polls also show nearly 90% of Americans don’t “believe in” evolution. You know, the thing that makes genetic engineering possible.

It also says:

United States government scientists have stated that the artificial insertion of genetic material into plants, a technique unique to genetic engineering, can cause a variety of significant problems with plant foods. Such genetic engineering can increase the levels of known toxicants in foods and introduce new toxicants and health concerns.

Which scientists? Where did they say this? I scoured the web and scientific literature databases and can’t find anything which looks like it would fit this description. How are they allowed to make claims like this without a single source? There are more examples of this which are just as outrageous.

3) The labeling it prescribes is not useful.

The initiative explicitly says that it does not require the identification of which components were genetically engineered nor how they were engineered. The only plausible argument for labeling which I’ve heard thus far is that “if a health issue is ever found in a GMO product, the label will help us avoid them.” But that’s not true, any more than a “grown with pesticides” label would help deal with an issue attributed to a specific pesticide.

This is because every GMO (even using the narrow definition the initiative eventually establishes) is completely unlike every other; there is absolutely nothing in common between them. All the label tells you is that some modification was made. There are dozens of unique modifications in widespread use today, and undoubtedly new ones will come in time. If you really want to “know what’s in your food,” you’ll at least need a label telling you what was modified and which modification(s) were used.

4) It’s full of exemptions with no justification for their inclusion.

The bill exempts animal products (such as dairy and meat/poultry) where the animal was fed or injected with GE food or drugs. It exempts cheese, yogurt, and baked goods using GE enzymes. It exempts wine (in fact, all alcoholic beverages). It exempts all ready-to-eat food, such as the hot food bars at PCC and Whole Foods. It exempts all “medical food” (without defining the term). Particularly odd for a bill which claims to be looking out for health concerns.

So who wrote this thing anyway?

The author of I-522 is Chris McManus, an advertising executive from Tacoma.  When asked about the details of the bill, he told Seattle Weekly:

“Well, you know, I’m not a scientist.  I work in media. Those kinds of questions I’ll have to defer to later in the campaign.”  (source)

How encouraging!

What does the science actually say?

My next task was to dig into actual scientific literature to see where the claimed health concerns were coming from, and to better understand the processes being used and research into their effects. Here’s what I found:

1) Currently marketed GMO products are safe

As the AMA, WHO, and others I linked to earlier called out, all research into existing GMO products has shown them to be as safe as their conventional counterparts. There have been zero cases of a health issue attributed to the presence of GMOs in food. (Source: Meta-study by Herman and Price)

2) Substantial equivalence is a useful tool

Government standards in the US and around the world require GMO products to establish “substantial equivalence” with their conventional counterparts and to identify and thoroughly test the effects of any deviation from this standard.

This is not to say that assessing the effects of GMOs is without challenges. However, researchers such as Harry Kuiper point out that many conventional foods contain some degree of toxic or carcinogenic chemicals and thus our existing diets have not been proven to be safe. This does mean that changes due to genetic modification could increase the presence of as-yet-unidentified natural toxins or anti-nutrients. However, this also means that positive changes can be missed just as easily. Bt Corn, for example, has been found to have lower levels of the fumonisins found in conventional corn (source).

More importantly, there is nothing anywhere to suggest that modern GMO techniques are more likely to result in such unintended and unidentifiable changes than traditional GE techniques or even “organic” cultivation. In fact…

3) Modern GMO techniques appear to be safer

More and more research is revealing that modern transgenic GMOs contain fewer unintended changes than result from traditional breeding and even environmental factors. (source, another source, another source).

Transgenics are GMOs produced by artificially transferring genes from a sexually incompatible species and tend to get the most attention from those objecting to genetic modification. Note that the term GMO also applies to cisgenic modifications (where the exact same methods are used to transfer genes from another of the same species or a sexually compatible one). I-522 makes no distinction between these concepts.

One reason GMO techniques appear to be safer is that they’re able to make more targeted changes. This illustration conveys a simplified view of these different GE methods:

[Image: diagram comparing traditional breeding, transgenesis, and cisgenesis]

Image source

Why do PCC and Whole Foods support I-522?

Because they’re businesses, and I-522 will make them more money. This poses an issue for fans of these “lifestyle” brands, because they generally associate them with good, wholesome values they share. However, these stores already sell largely non-GMO products. In fact, there are multiple certifications used throughout these stores which guarantee an absence of GMO material. These include USDA Organic and the Non-GMO Project (an organization who will provide a non-GMO label for products matching their standard in exchange for a fee).

Thus far I’ve stuck to pure facts with sources to back them up. If I may indulge myself in one paragraph of speculation, I will ask you to consider the list of exemptions included in I-522 and ask yourself where they came from. Then look at the products these stores sell which do not have opt-in organic certifications. I find it unlikely that the near complete overlap of these lists is an accident. However, this is just speculation on my part. If there is any hard evidence that the bill was crafted specifically such that these stores would experience no changes, I would be very interested in seeing it.

Whether done intentionally or not, though, the end result is the same. These stores will not have to change the products they offer, and because I-522 specifically calls out that USDA Organic and Non-GMO Project certified products require no additional scrutiny, their prices will not be affected. However, the same is obviously not true for their competitors like Safeway and QFC. These stores will see price increases.

Why would this cause prices to increase?

Farmers who already use non-GMO products, but who currently do not opt into a certification program, will have to pay for that program to avoid having a scary label added to their wares. That means they’ll have to increase prices. The large swath of producers who do use GMO products, or who are unable to guarantee their absence, will not only have to suffer sales effects from a misleading label, but will have to incur the costs of adding it. It’s difficult to say who will be hit harder, but the fact that this law will add cost to the system and thus result in higher prices is impossible to deny.

But why is it bad to know more about what we’re eating?

All else being equal, it’s not. But as I alluded to early in this post, the proposed label doesn’t tell you anything about what you’re eating, and all else is not equal. The spectrum of possible labels we could put on food to “inform” buyers is endless. We could put a label saying a food includes ingredients from crops which were sprayed with any kind of pesticide. We could label products made with unfiltered water (which will undoubtedly be lavished with support from Brita and Pur!). Hell, we could label products harvested using red tractors (no commie wheat!). But we don’t do these things because mandatory labels mean cost and bureaucracy. Regulation of the food industry is crucial to health and safety, but to be effective it must be based on science and facts, not emotional reactions. Support for I-522 seems to be largely based on what Stephen Colbert calls “truthiness.” While some things may feel right to your gut instinct, that doesn’t mean they are.

Where can I learn more?

The links embedded throughout this post provide a wealth of information about the subject. However, they barely scratch the surface. Some additional resources you may find useful include:

Biofortified (an independent educational non-profit from Wisconsin) provides a really great in-depth breakdown of I-522 and the associated controversy.

Wikipedia has a large page dedicated to the controversies around GMOs. It includes a lot of great summarization of the science and history of these issues, and is a great hub for finding references pertaining to all aspects of these issues.

There are numerous meta-studies which compile and assess the results of research spanning the peer-reviewed scientific literature. One example I linked to earlier is Herman and Price’s paper, Unintended Compositional Changes in Genetically Modified (GM) Crops: 20 Years of Research.

Sep 4 13

Did the Surface write-off cause Microsoft to buy Nokia?

by Brandon

I’ve read a lot of speculation the past couple days about why Microsoft went ahead and pulled the trigger on the Nokia acquisition. Some say that Nokia was in a worse financial situation than most believed. Others say that they were threatening to switch to Android. Still others posit that this was the plan all along, ever since Elop took the reins at Nokia.

What follows are my own thoughts about what may have led to the buyout. But first, I want to address the question of whether this was Microsoft’s “master plan” all along. I would say no. I don’t have any inside information on this, but I do think my experience at Microsoft affords me a somewhat different perspective from which to speculate. That in mind, here’s what I think the senior leadership may have been thinking when the Nokia arrangement first came about:

First, Microsoft looked at the marketplace and saw two things: Android taking the largest piece of the mobile pie with a variety of OEMs (following a particularly Windows-ish model), and Apple taking the smaller but most profitable chunk, just as they had been doing to the PC market with the Mac for several years. Some things to note are that a lot of Microsoft folks had developed a lot of respect for Apple’s emphasis on design and polished experiences, and a not-so-small bit of envy for the margins those things afforded them. Android, on the other hand, was (and I think often still is) looked down upon by Microsoft from both the design and engineering side of things.

Keep in mind, we’re talking about opinions of Android formed 2-3 years ago. Based on that, I think they decided that Android was a total mess. They looked at it and saw poor OS performance (particularly with the abysmal scrolling performance seen on anything but the beefiest hardware), battery life problems (my Droid Incredible might as well have been a landline phone), fragmentation, OEM crapware and lame skins, and a massive and growing malware problem… I remember jokes at one point about how ARM chip manufacturers like Nvidia had to start including extra shadow cores on their chips to work around architectural flaws in the Android core OS. Surely, the thinking went, Microsoft could beat them handily in engineering and design.

But just beating Android at their game wasn’t enough. Microsoft wanted it all. Or rather, they wanted in on both games: the mass-market OEM-driven one that Android was dominating, and the integrated end-to-end device experience from which Apple was raking in huge bucks.

It’s easy to see how this was meant to play out on the tablet side. Forget Windows 8 on the desktop; it would take that market naturally, and there its main “competitor” is Windows 7. If you think Microsoft was ever scared that Windows 7 would be “the next XP” on traditional desktops and laptops, you’re not thinking clearly. Folks there would sleep perfectly well at night were that to be the case.

The real point of Windows 8 was to take on Android tablets. It gets thrown over the wall to the OEMs and they build hundreds of different machines at every price point, sometimes with crapware and weird skins or “value adds” thrown in, and if they could succeed doing that with Android then they could succeed doing it with Windows 8.

Surface, on the other hand, was to fight on a different front. The base model (“RT” – ugh), would take on the iPad. The Surface Pro would go up against the MacBook Air. In this brave new world, the OEMs might be agitated, but they’d largely be unaffected and get over it. They weren’t showing any signs of being able to compete with Apple for that part of the market anyway, so why should they care if Microsoft takes a stab at it?

Windows RT, then, was sort of an experiment, and also a contingency. As several publications have reported, Windows RT OEMs had to be selected by the chip manufacturers, and “slots” for Windows RT OEMs were very limited. The idea was that Windows RT devices would be better (aka “more Apple-like”) than Windows 8 tablets, where Microsoft’s influence was very limited. And if one of them somehow did a better job taking on Apple than Surface, that’d be perfectly fine.

Tangent: Funnily enough, I got the impression that some nameless unselected OEMs made a big stink before it was even released about “deciding” not to make Windows RT devices (much in the same way that I’ve “decided” not to date Jennifer Lawrence). At the time I thought they were just being spiteful and/or cute. Or that maybe I’d been wrong (I was never privy to the actual list of invited OEMs). But in retrospect they were fortunate enough to come off looking smart, given the lackluster performance of Windows RT thus far, and Microsoft not so much.

So back to Nokia and Surface. I suspect, with no evidence at all, that a similar plot was afoot for phones. In fact, what I think Microsoft really wanted from Nokia in the beginning was Windows Phone’s version of the Motorola Droid. A breakthrough device that gets the platform off the ground. With that, they could get other OEMs on-board in earnest, and start taking a bite out of Android’s sizable market share. Meanwhile, they’d prepare to unleash the power of their fully armed and operational hardware battle station, and take on the iPhone with their own thing. I don’t think of this as a use-them-and-lose-them play regarding Nokia. Rather, they wanted Nokia to succeed, but expected there would still be room at the top for a first-party thing down the road, much as they’d hoped things would work out with both Windows 8 and Surface.

So what changed?

Well, first, Nokia has yet to build that breakthrough Droid offering. Personally, I don’t see how they were supposed to do that. The success of the Motorola Droid was Verizon’s doing, not Google or Motorola’s. Scratch that, the real credit likely lies with Apple. And Microsoft. Apple, for maintaining exclusivity with AT&T, and Microsoft for failing to provide Verizon with an alternative to go all-in on. If there was ever hope of that happening, Kin burned that bridge and I think it’s still smoldering. So Google and Motorola gave it their best, which really was pretty terrible next to an iPhone, and Verizon took it and made it a household name. Apple either didn’t see this happening, didn’t care, or just couldn’t get out of their AT&T deal in time. When they finally did, Android had already hit critical mass, and Microsoft was still crapping out dead-end Windows Mobile 6.x turds that nobody wanted.

So how could Nokia do that? Well, the camera emphasis is a valiant effort. It’s been effective, but not to the degree that it really needs to be. So Nokia left to its own devices (slight pun intended) has not lived up to Microsoft’s hopes.

Secondly, Samsung. I think Microsoft has come to the realization that Samsung isn’t really an Android OEM any longer. They’re building their own ecosystem, and when the opportunity arises, they’re poised to fork Android or dump it altogether. They’re already not far off from Amazon’s level of Android bastardization. This suggests the OEM-model is less sustainable than Microsoft previously thought.

And finally, what may have tipped the scales: Surface failed to take off. Going into it, Microsoft had an endless amount of faith and confidence in its ability to build up an entire Apple almost overnight. I think the board likely drank that Kool-Aid (as so many of us did) and when success failed to materialize, they had a bit of a reality check. I can’t help but think that this influenced the timing of the Nokia acquisition.

I suspect Ballmer and the board had their finger on the “acquire” button all along, mainly to ward off a competitive acquisition or to save Nokia from drowning and leaving Microsoft with no one to throw a Windows Phone-shaped life preserver to. A year ago, saving Nokia may have seemed less crucial, had it come to that. After all, Microsoft could just build its own hardware and forget the whole software licensing side of things altogether if it needed to (unlike the PC market and big Windows, they really had nothing to lose by doing this). I don’t think it was plan A. Maybe B or C. But in light of the Surface write-down, I think that contingency suddenly seemed a whole lot more risky, and that finger over the acquire button got twitchy.

Aug 14 13

Surface RT 2: What will they do, what should they do

by Brandon

Kevin C. Tofel over at GigaOM just posted his suggestions for how Microsoft could turn the next Surface RT into a winner. He makes some good points. Here’s my take.

What I expect

Hardware

Microsoft (and Nvidia, and even Qualcomm) have all hinted or straight-up said they’re working on a second generation of Surface hardware. Let me be ultra clear: When I began my leave before resigning from Microsoft, I was not yet privy to any hardware plans, so absolutely anything I say on the matter is purely speculation. And who doesn’t enjoy a bit of that?

Some things are just obvious. Nvidia’s CEO said they’re working on a new Surface RT. Instead of the then-already-obsolete Tegra 3 that held back the first generation device (both in performance and screen resolution), it’s pretty obvious that some variant of the Tegra 4 will be arriving in a Microsoft tablet. It also doesn’t seem a stretch to expect such a thing to arrive with Windows 8.1 on or around October 18th. Again, just stating what seems obvious based on reading the same reports everyone else has.

I’ll limit my further hardware speculation to saying four things:

  1. Display. I’d be surprised if the second-gen Surface RT did not have a 1080p display. Kevin thinks this is unnecessary, or not worth pushing the price up to accommodate. I have to disagree here. Going higher than 1080p seems unlikely, and the returns are clearly diminished after that point on a 10.6″ display. But the difference between text on a first-gen RT and a Pro is stark. As well, the number of reviews that dinged the original model for its comparatively low resolution display suggests it would be unwise to ship a follow-up a year later without addressing that.
  2. Price. As much as I would love to get my hands on one of those for the $299 price point Kevin suggests they aim for, I just don’t see it happening. Not on a 10.6″ model. Instead, I expect prices to be in line with the original launch price of last year’s RT. Maybe $100 cheaper, or with some kind of keyboard cover / accessory bundle. A better deal than that is certainly possible, but I just don’t see it happening.
  3. Hardware design. I don’t expect this to change in any drastic way. I do think they’re perfectionists though, and they will have at least tried to address any nits people had with the first model.
  4. Smaller, cheaper model. This just seems inevitable.

That’s what I think they’ll do (on the ARM side of things anyway), and for the most part I think it’s the right set of things. As Kevin says, the Surface hardware is actually already very nice. It’s just a little underpowered, especially for the price it launched at. That’s easy to fix. A boosted screen resolution and some other tweaks should help shore up its review scores. That leaves the real challenge (and where I think they’ll struggle most) to the product positioning, marketing, and software.

App ecosystem

You’ll hear a lot about how the app situation has improved drastically from the Win 8 launch, but virtually everyone is still going to say “but it’s still not where it needs to be.” Oh right, forgot the spoiler alert. Sorry if I just ruined every Windows 8.1 review for you 🙂

Office and Desktop

They added Outlook on the desktop for Windows RT. This betrays the fact that they still don’t have a “Metro” version of Office ready and that the desktop is still around. On one hand, this is obviously a shame. On the other, it seems at least some folks were asking for it. So yeah, if Outlook is the one desktop app you can’t live without, 8.1 will save the day.

Every review is still going to talk about how confusing the desktop is on a tablet (and more so than ever on a smaller version), and they’re all going to say a keyboard cover is a must to make sense of it, bringing the effective price up substantially. Which leads me to…

What I would do

Imagining a world where I’d run everything from the beginning or could go back in time to last year and make things right can be a fun exercise, but isn’t particularly helpful to anyone. So instead, let’s assume I were in charge of what Windows and Surface did starting today, working with what I assume they have based on what I said above. Here’s what I’d do.

Naming

I’d call the thing Surface 2. No RT, please. The 2 stands for “we fixed it.” Assuming there’s a little one, that’s the Surface Mini. Maybe someone thinks naming like this is too easy and they won’t be earning their paycheck by picking the obvious answer. Trust me, simple is your salvation. In this hypothetical world where I’m in charge, there’s an important rule. For each extra word, you’re fired. And for Pete’s sake, don’t futz with it once you launch. We’ve all seen enough Windows Phone 7 Series, Surface with Windows RT, Surface Windows RT identity crises to make our heads spin. You can’t hotfix a brand. Seriously, the way you MS branding guys swerve, I sometimes want to give you a sobriety test.

Don’t put the Windows RT name anywhere. If you have time, replace it. If anyone asks, this is now Surface OS. It’s like Windows 8, but it’s not backward compatible. Done. You’re welcome.

Office and Desktop

Windows dev team, you have one more DCR (for the uninitiated: a design change request, i.e. a last-minute feature). Hide the Office and all other Desktop tiles by default on Windows RT. Or at least on MS-branded RT devices (err, maybe those are the same thing now – good thing, too, now that you’ve followed my earlier order to rename it Surface OS). At a minimum, don’t have any pinned on Start. But really, just hide them all, from everywhere. I don’t care if power users can get there via search or task manager or what have you. Just bury it.

Notice how I didn’t say remove Office? Yeah, there’s a reason for that. You’re going to take my advice from the last post and from now on, Office is “included” with the Touch or Type Cover. Make some pretty retail boxes that scream “This box has Office and a sweet keyboard cover! That’s totally worth $100!”

You plug one of those in, and bam, Office (and any other desktop stuff you can’t bear to leave buried) appears. It was always on the drive (shhhh), you just need to connect a keyboard to unlock it. From then on, it’s there, even when the keyboard is detached. You could hide it when detached, but that’d just be an annoying limitation and add complexity to the implementation. You’re shipping in two months, so let’s keep this simple. Maybe other users still have desktop things hidden until they connect it at least once. Keep things simpler for the kiddos or something. Whatever. That’s just details.

See what that did? Now you have Surface and it’s a pure, uncompromised iPad competitor. All that work to make Windows 8.1’s PC Settings app complete can actually pay off. Many people will buy this and use the new stuff and they’ll be happy with their Microsoft iPad.

Some will buy it and be sold on the idea that they can add a keyboard and Office later. They’ll take their Surface 2 home, enjoy it for a while, and eventually think “Man, I never even use my old laptop any more, except for Office. I should go buy that Office for Surface and keyboard thingy.” Others will buy both up front, as I bet most do today, and that’s awesome.  But now you’ve actually got a contender for the “I just want a tablet” market. Whereas last year every review (and probably every salesperson) said “to really use the Surface, you need the keyboard. Office is just useless without it,” instead people can be comfortable buying (and selling) it as a tablet. And they’ll no longer feel like they have to spend an extra $100+ to make their $500 device useful. That’s a good thing. Because as it was, most of them weren’t going for it.

In-store marketing

You’ve got some great attack ads going on now which pit the iPad against the Surface (and other Windows 8/RT devices). That’s great. But you know what would be better? Do that in the Store. In fact, here’s what I’d do:

Above the Surface display at Best Buy, put a little flyer that compares it against the iPad (and maybe an Android thing). Have a picture of each, with the Surface clearly looking more awesome, and then have the usual point-by-point checklist. For example:

Feature                                            iPad         Surface 2
Fast performance                                   x            x
High res screen                                    x            x
Light + thin                                       x            x
Battery                                            ~10 hours    ~10 hours
Kick stand                                                      x
Keyboard covers                                                 Sold separately
Office 2013                                                     Separately (w/ cover)
Facebook, Rhapsody, FlipBoard, Angry Birds, etc.   x            x
Do two things at once                                           x
USB port                                                        x
Price                                              $499         $499

(All values in this table are entirely fictional for illustrative purposes, and based on nothing but the speculation in the first part of this post)

Then, assuming you make one, do the same with the Surface Mini and iPad Mini.

Then, and this is important, do it with the Pro and the MacBook Air.

Now people know which products to compare. Surface vs iPad. Pro vs MBA. People will get this just by seeing those pictures side-by-side, and they’ll have expectations you’re better able to meet.

Office and the Pro

In my last post I described how the fact that the Surface RT comes with Office, and the more expensive “Pro” model does not, is the most confusing thing to happen in the history of the universe. Seriously, if you want to make a Best Buy customer’s head explode, just try explaining this. Or if you want to torture a sales person, ask about the RT, then the Pro, and say these magic words: “Cool, so they both come with Office?” Fix this. I don’t care if this means sucking it up and including Home & Student on the Pro, or including a year of Office 365. I can imagine a sales person convincing you that the 1 year subscription with the Pro is better (you can use it on up to 4 other PCs, too!), and a year seems like a long time to not have to worry about it. Right now it’s got a trial or something. Not acceptable.

If you did what I said earlier and made Office part of the keyboard purchase, then you’ve sort of solved this problem. Maybe the keyboard unlocks a preinstalled copy of “Office 2013 Home and Student for Touch PCs” (someone just got fired 6 times) just like on the other (“RT”) models. Maybe it pops up a window that lets you redeem 1 free year of O365. You guys are smart. Make that work.

Apps

Proudly feature awesome Windows exclusives like this one and help the developers promote them. I’ll consider it a thank you for all the free advice 😉

Aug 6 13

Monday morning quarterbacking: Surface RT Debacle Edition

by Brandon

Hal Berenson regularly shares some of the greatest insights into Microsoft that I see appear around the web. His latest post, Fixation on Margins: The Surface RT Debacle Edition, does a great job conveying both the ambition of Surface RT and its failure to achieve its lofty goals. That inspired me to share some thoughts I’ve been having for a while (and occasionally tweeting about), regarding where Microsoft went wrong with Surface. This isn’t an “I told you so,” (because I didn’t, and it wasn’t my job to). It’s not even a “what I would have done,” though maybe a “what I’d like to think I would have done.” 🙂

In Hal’s post, I particularly like his invocation of Maslow’s hierarchy of needs in describing the value of Office on a tablet. I agree 100% that Office is a highly valued, worthwhile differentiator for tablets. That is, I am certain that a very sizable percentage of iPad and Android tablet users would consider Office a desirable addition to their tablet experience. What they won’t say is that Office is so worth having that they would sacrifice other things they love about those tablets. In particular, the apps they cherish most.

I don’t think any of this is news to the folks behind the Surface and Windows RT efforts. In fact, achieving that first level of “Maslow’s tablet hierarchy” (or maybe that should be Berenson’s tablet hierarchy?) was of the utmost priority for Microsoft. This was entirely the purview of the Windows team (and by way of partnership, DevDiv). Just some of the efforts they/we undertook to achieve that are obvious:

  • One store for all Windows 8 (and RT) devices, which enabled all the developer marketing around reaching “hundreds of millions of users” within the first year basically by default.
  • The better-than-them store deal, where developers retain 80% of their sales rather than 70%, after exceeding the $25,000 threshold.
  • A huge investment in tools for Windows 8 apps, along with the multi-pronged platform approach designed to appeal to three groups:
    • Web developers, with the first-class JS/HTML5 platform, new JS library, and kick ass tools.
    • XAML / .NET developers, with the WinRT XAML platform and familiar tools.
    • Native developers, by finally (finally) providing them with a modern UI framework (XAML), investing in C++ 11, WRL library, CX extensions, and long overdue tooling updates for native C++ developers. Oh, and continuing to invest in DirectX.
  • Restricting many of the new platform investments to Metro applications (though I’m sure other factors around time/resources contributed to this).
  • Booting to the start screen, setting “Metro” apps as the default file association handlers on all PC/device form factors, and generally treating the desktop as the “classic” environment, much in the way that Windows 95 treated DOS apps.
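As a small tangent, the tiered revenue split mentioned in the second bullet is easy to sketch out. This is just my own illustration of the arithmetic (rates and threshold as described above; currency rounding and any other contract details ignored):

```python
def developer_payout(lifetime_revenue: float, threshold: float = 25_000.0) -> float:
    """Developer's cumulative take under the tiered store split:
    70% of revenue up to the threshold, 80% of everything beyond it."""
    below = min(lifetime_revenue, threshold)
    above = max(lifetime_revenue - threshold, 0.0)
    return 0.70 * below + 0.80 * above

# A $100,000 app nets the developer 0.70 * 25,000 + 0.80 * 75,000 = $77,500,
# versus $70,000 under a flat 70% split.
```

In other words, the better rate only kicks in for revenue past the threshold; it isn’t applied retroactively to the first $25,000.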

That’s not to say that this was the only motivation for some of those decisions (particularly the latter ones). I can attest to the fact that those in charge of those later UX decisions were “true believers” who felt each of those decisions was not just supported, but often mandated, by the Metro-inspired design principles they’d adopted at the outset. You can call it over-confidence, or even arrogance, or an over-correction at the sight of Apple’s success with what seemed to be “the Jobs/Ive way or the highway.” But from my perspective I saw no reason to blame those sort of decisions on out-of-touch executives or inherent organizational flaws, and certainly not on incompetence by the front-line engineers or designers.

What I might attribute to organizational or leadership failings is the lack of cross-group coordination, particularly in the marketing and basic product development areas. A quick note on my perspective for those who don’t know me: I was a senior IC developer on the Windows 8 User Experience team, who left the company earlier this year for a variety of reasons, but mostly to scratch an entrepreneurial itch that my 8 years at Microsoft had left unscratched. So you could say that as an engineer it’s easier for me to criticize these “farther away” disciplines. You might be right.

The prime example for me, though, is the fact that you’ve probably heard of Windows RT. From my perspective, that name was chosen because it didn’t matter. People don’t buy tablets based on operating systems, the thinking went. They buy tablets. That is, you don’t buy an iOS device, you buy an iPad. You don’t buy an Android tablet, you buy a Galaxy or a Nexus.

You can see how this could have extended to the Surface. I wasn’t in the marketing or product marketing part of the organization, but my sneaking suspicion is that on the Windows side, this is how the naming of Windows RT came about. Surface would be the brand (or in the case of Samsung it’d be something like Ativ, Yoga for Lenovo, Vivo for Asus, etc). It made sense to me, anyway. I never expected to have to explain Windows RT to anyone. I was, however, quite prepared to say “Surface runs new Windows 8 apps and comes with the new version of Office, but isn’t backward compatible with older versions of Windows.” I figured normal humans would grasp that concept. The last couple decades have taught people the “backward compatible” concept. If they didn’t get that, you could just make Ballmer cringe and say “it’s a tablet, not a full computer.” Wrong or right, I bet a lot of people would understand what you meant. So yeah, simple message: Windows 8 is backward compatible, but Surface is not. It’s Microsoft’s iPad. It’s super well made, priced competitively, and runs tablet apps. But it also runs Office, has a kickstand, and you can get these cool keyboard covers. Awesome, right?

Sure, it still had the apps thing to overcome. But everyone has to overcome that. Apple did. Yes, Apple had phone apps on the iPad at launch, but those were next to useless and honestly such a poor experience (particularly for non-games) that I’m still shocked Apple actually did it. Anyway, there were a lot of reasons Microsoft and the Windows team thought this would be fine. The aforementioned message to developers was expected to be super compelling. The fact that tablet users spend the vast majority of their time in the web browser was another (and I still think Metro IE on Win8 is the best tablet browsing experience bar none). The plethora of Microsoft-made apps was also expected to play a big part in satiating early adopters until the ecosystem began to thrive on its own.

And so, the story of Surface made sense to me. Then the announcement came, and I learned of the Surface Pro. Cool! Right? But wait, now the message got complicated. “Surface is the cool tablet that runs new stuff and Office, but isn’t backward compatible. Surface Pro runs old stuff, but it doesn’t come with Office. It’s more expensive, but it’s not as good at tablet basics like being small, always-on, and having a long-lasting battery.” Phew. That’s a mouthful, and probably takes a few repetitions to sink in.

No no, it isn’t that bad, I thought. The Pro is going to be more of a niche thing, and not even there at the start. It’s designed for people who were willing to pay more and compromise their tablet for something fast, higher resolution, with an active digitizer pen. So maybe I wouldn’t have to tell most people about it. Even if they saw the Pro at the store, they’d see the price tag and the added heft and those outside the target niche would move along. My earlier “Surface is a tablet with Office but isn’t backward compatible” explanation would still suffice for the general case. Or so I hoped.

You know, odd as it may sound, I initially thought that compatibility with “classic” Windows apps on the Pro was an extra bonus. A footnote, more than a selling point. Unless you’re a developer, maybe. Or part of an even smaller niche who thought it’d be the perfect device if it just ran Photoshop. In retrospect, I can’t help but wonder if this “feature” may have won over a tree at the expense of the forest… But I’ll get to that in a moment 🙂

Compatibility differences aside, the other big complicating factor the Pro introduced was the “Oh, the expensive one doesn’t include Office but the cheap one does” conundrum. I’ve heard several Best Buy associates try valiantly to explain this, and every time I can’t help but feel sorry for them. It reminds me of trying to explain the Windows Vista SKU model. It’s one of those things that can only make sense from the perspective of a product development person assuming that it’s someone else’s job to make sense of it (i.e., “marketing will explain it to consumers”). Sadly, most of those explanations ended with a bewildered customer walking away from the Surface displays, presumably to find something that made their brain hurt less.

So with all that in mind, and the substantial benefit of hindsight, here are ways I think the messaging could have been kept simple:

  • By not releasing the Surface Pro at all, at least the first year when establishing the brand and expectations about it.
  • By making it unavailable at retail, offering it instead as just a development device for Surface. “Surface Developer Edition” or something. Would’ve sold out at Build faster, I bet 🙂
  • Just plain not use Surface in the name of the Pro, embracing the “laptop first” model it better fits in. Two clear messages instead of one muddled one.
  • Or… brace yourself for this one. Are you sitting down? Good.
    Ship the Pro with an Intel version of Windows RT. No classic desktop apps here. Blasphemy? I don’t think so. Maybe make an exception for VS (or have a “developer edition” variant), because Surface + fast + Office + VS = awesome dev device. But otherwise it’s the niche power-tablet. Still a tablet. Still a Surface with the same, simple message. Just made for people who are okay sacrificing $$, battery, and weight, to have tomorrow’s performance today. If people will buy giant 8-core phones, I could believe there’s a solid niche willing to make this sacrifice.

You sell/give some of those “Intel RT”-running Pro devices to developers, with only Office and dev tools on the desktop, and I bet some of them build apps for it just because they want to use them. I’d bet money you’d at least have a BitTorrent client by now.

While I’m Monday morning quarterbacking the whole thing, I’ll share another idea I had a little while back: Don’t include Office with the Surface. Include it with the cover.

The initial Surface RT price may have seemed ambitious, but the prices of the Touch and Type covers are what really hurt. Maybe if the Surface price had been something you could call cheap, then the cover price could be “where they get ya.” In my experience, people are okay with things like that. They just feel smart for recognizing it. But with the Surface RT, you were already paying at least what Apple charged just for the tablet. If you’re paying someone the same thing Apple charges, they’ve already “got” you.

But let’s say they offered the Surface RT at the iPad competitive price, and convinced you that it was a good tablet and that your favorite apps, if not there, would be coming soon. You’re interested, and weighing the pretty tiles and kickstand against the iPad you saw at the Apple store. Then the salesperson says, “Best of all, for just $120 you can have Office and this awesome keyboard cover, so you might not even need your laptop anymore.” Well I don’t know about you, but that almost sounds like a deal!

The other big leadership failing that happened somewhere is almost too obvious to bother mentioning. The lack of proper “Metro” versions of Office is inexcusable. Hell, they could’ve jammed the Office Web Apps into Win8 apps (remember that awesome JS/HTML platform designed specifically so that people could do things like this? Apparently someone important didn’t). Even if they intended to replace them a year or two later, you can’t convince me that they couldn’t have had something compelling there. Then you could’ve had RT sans desktop. Or at least without it pinned by default, left in as “debug/admin mode” the way command prompts have been for ages now.

All of those things said, when I look at the future of Windows and Surface, I don’t see doom and gloom. I don’t think any of the above “things I would’ve done” will happen, or even need to happen. For the most part, we’re talking about ships which have long sailed. Thing is, I see this whole “Surface Debacle” as a stumble. Could that lead to them falling flat on their face? Maybe. But if there’s a tech company that knows how to power through a stumble and come out laughing, its name starts with an M and ends with “icrosoft.”

Jul 10 13

Stop saying mobile web apps are slow

by Brandon

This morning I came across a post by Drew Crawford entitled “Why mobile web apps are slow.” It’s an interesting read, with some good information included. But what struck me is that the article content doesn’t really address the topic of the title.

Why do I say that? Well, the entire (rather lengthy) post is about JavaScript. On iOS. In fact, the main points seem to be:

1) JavaScript (whether JIT’d or interpreted) is, as Drew claims, “5 times” slower than compiled native code.

2) ARM is slower than x86. Drew says “10 times slower.”

First, I’d like to address those points. Then I’ll dive into my thoughts on performance of mobile web apps.

Is JavaScript slower than native code?

Yes. Of course it is. Drew does mention that the gap has closed significantly over the years, but seems to suggest it has stopped closing and settled at the roughly “5x slower” mark he mentions. I don’t think this is quite right. I think core JavaScript JIT execution will continue to approach native compiled execution asymptotically. It won’t ever match it exactly, but it will continue to get better, at least so long as you have big players like Google and Microsoft pouring huge resources into doing just that.

How much slower is it today?

It depends. In fact, one of the challenges in answering why mobile web apps “are slow” is that the performance characteristics of each platform’s JavaScript engine varies significantly. The single largest factor that I see here is Apple’s decision to handicap web apps by running their JS engine in interpreted mode outside of Safari. So in some cases it’s not the language or even the runtime’s fault, but policy decisions by vendors who want to discourage its success. That’s a steeper hill to climb than any technology limitation!

Another factor is what you’re doing. A lot of JS operations already execute at near-native speeds on a good JIT engine (so, not in an iOS app). If you don’t do many allocations, you may never feel the impact of garbage collection. On the other hand, if you’re constantly creating new objects instead of reusing from an object pool, you may hit them frequently and performance will certainly be affected.

That said, is five times slower a reasonable approximation? Sure, why not. I think it’s wrong for a lot of cases, but I’m reasonably okay going with that for the purposes of this discussion.

Is ARM really 10 times slower than x86?

Of course not, but this is the wrong question. How about: Is the sort of mobile chip you find in a tablet or smartphone 10 times slower than the sort of chip you find in an Ultrabook? Perhaps.

ARM chip speeds vary hugely. They’re also affected by the type of workload you give them in drastically different ways from the Core i5 in the Surface Pro I’m typing this on. For example, the Tegra 3 in the Surface RT is a quad core chip and runs at roughly 1.3GHz. Sort of. It can’t actually run four cores at that speed for very long, because it has a target power and thermal profile which won’t allow it. For short bursts it can use all that power, but for sustained work the actual operating frequency and number of active cores is constantly being throttled to allow the thing to run all day and not require a fan.

But the same is true of an Atom SoC. In fact, a lot of the learning that Intel has been doing lately has been along those lines. That is, building chips that can scale their power usage and thermal output a great deal and very, very quickly. This is how you get the most bang for your buck (/watt/degree) in the mobile world.

This complicates comparisons between a mobile SoC and an ultrabook or desktop CPU. Combine this with the fact that ARM SoCs themselves vary greatly in computational power (i.e. the Tegra 3 is a relatively slow chip, especially when compared against the latest iPad chips). Some of those ARM chips outperform the latest Intel Atom SoCs significantly. Maybe not in all workloads, but surely in some. I don’t think Intel is going to magically change that with Bay Trail. Rather, I think they’ll have roughly comparable performance. Certainly they will not magically be 10x faster by virtue of being x86! Intel may gain some ground using its manufacturing muscle and ability to die-shrink faster, but that isn’t going to yield the sort of improvement Drew seems to expect. And I’m skeptical it will be enough for Intel to win over big ARM customers. I could expand on this, but my thoughts on the future of ARM and Intel SoCs are a topic for another post 🙂

Discussing mobile web app performance

Would it surprise you to know that JavaScript is just one, often less important, factor in the performance of most mobile web apps? Sure, if your “mobile web app” is just calculating digits of pi, then JavaScript is your main concern. Depending on your platform of choice, this may suffer significantly versus an equivalent native app. On iOS, it certainly will, because of the aforementioned crippling of JS execution which Apple has chosen to impose on its customers.

But what about the majority of apps? For example, why did Facebook choose to abandon its “HTML-based” app and go native? I don’t think JavaScript performance alone is the answer. So the rest of this post will focus on the technical factors I think came into play in that decision (ignoring political, marketing, or skill-availability reasons).

My first step will be to define a few terms and scope the discussion. For the remainder of the post I will assume the following:

Mobile Web App refers to an installed application on a smartphone or tablet, where the bulk or entirety of the app code is written in JavaScript, and the UI is built using HTML and CSS (perhaps with some canvas or SVG usage by way of the HTML engine).

Platforms in scope for this discussion are iOS, Windows 8, Windows Phone, and Android (where I have the least amount of knowledge to work from at this time).

Web runtime refers to the combination of HTML and CSS parsers, the associated layout engine, rendering engine, DOM, and JavaScript engine. For example, WebKit+Nitro on iOS, Trident+Chakra on Windows.

Performance refers to the ability to provide the desired functionality with what users perceive to be a responsive, “fast and fluid” experience. In particular, one which is indistinguishable from an equivalent native application.

Challenges for mobile web app performance

Code and resource delivery

The simplest way to put a “web app” into an app store on a mobile platform is to simply wrap a web page in a WebView control. This is a common practice on every mobile platform, and really provides the worst case example for this sort of performance discussion. If your app’s code, markup, and/or resources (i.e. images) need to be downloaded before they can even begin being parsed or executed, then you’ve already lost a very important performance battle. Even if you assume ideal network conditions, and have condensed your first page download size to something very small (say, 30KB – pretty optimistic!), you almost certainly have a best-case latency in the 100-200ms range. That’s just to get your code on the box. And that’s assuming you have effective geo-distribution on your server-side, perhaps via a CDN. Realistically, you’re dealing with a larger download and you’re going to have a lot of users on slow cellular or coffee shop connections, where this latency can be measured in seconds.

It gets worse still if you take a naïve approach and make many HTTP requests, each with measurable overhead, versus bundling them effectively for your needs. Or have to hit a data center on the other side of the continent (or world).

If you’re going to compare a mobile web app to a “native” mobile app, you need to play fair. I don’t know if Facebook’s old app downloaded all or most of its HTML, CSS, and JS on launch, or what caching implementation it may have used, but you probably wouldn’t build a native app which downloads its compiled code or layout information from a web server. So why assume you have to build your “web app” that way?

Some platforms make building an app with installed HTML, CSS, and JS very easy and efficient. Okay, one platform does (hint: it’s the one I helped build). But on non-Windows platforms (or Windows Phone) you can certainly make it work. And if you’re serious about performance, you will.

Platform startup optimizations

Assuming your app’s package includes your code and at least the majority of your HTML, CSS, and static resources, your next challenge is optimizing what happens when the user clicks on your app’s icon or tile, especially the first time they do it. Each platform has its own optimizations for general app start-up. Whether it’s optimizing disk layout or prefetching commonly used app files, or providing splash screen / placeholder screenshots / mark-up based skeleton UIs / fast suspend and resume, each platform strives to make its native apps feel like they’re always ready to go.

Unfortunately, most platforms didn’t have web apps in mind when they designed these optimizations. Again, you see the difference between platforms which do and don’t optimize for this by contrasting iOS and Windows 8. On the latter, your app’s JavaScript is processed into bytecode and cached that way at install time. Further, components like SuperFetch are aware of web apps and know how to apply the appropriate optimizations to them.

The result of those platform optimizations is that even on a particularly slow Tegra 3 machine like the Surface RT, web apps can start just as quickly as native C++ apps (and in my experience, slightly more quickly than .NET based apps). Yeah, they’re limited by the hardware (both CPU and I/O), but that “5x slower” margin Drew mentioned simply doesn’t exist there. They’re all equally fast on a fast chip or equally slow on a slow one 🙂

Threading model

The next challenge mobile web apps face is the threading model they’re subject to, which evolved from the very earliest days of the web and carries significant legacy with it.

In a traditional web runtime/browser environment, you have one “big” thread where nearly everything happens. The main exception is I/O, which (thankfully) has pretty much always been asynchronous in the web world. So your requests to the network or disk (whether via XHR, src tag for an iframe, image, or script tag, platform API, etc) will not block your UI. But what will? Pretty much everything else.

Not long ago, web browsers did all of these things on a single UI thread:

  • Markup parsing
  • Layout
  • DOM operations
  • Input processing
  • Rendering
  • All JavaScript execution
  • Garbage collection
  • Misc browser overhead (navigation work, building objects like script contexts, etc).
  • Probably a bunch more I can’t think of right now.

In addition to the introduction of JIT engines for JavaScript, some of the largest performance gains for web apps (and the web in general) over the last few years have come from parallelizing some of these tasks and offloading them from the UI thread.

But even without parallelization, there are optimizations JavaScript developers can apply to improve the responsiveness of their app. Mainly, keep in mind that any time your JS code is executing, your UI thread is unable to process messages or perform layout. One thing you can do to improve this situation is to break your work into chunks and yield in between. This may seem quaint to some (or familiar to anyone who’s used something like Application.DoEvents in .NET), but it’s a reality for web app development. Though as you’ll see below, it is becoming less critical for apps targeting modern web app platforms.
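A minimal sketch of that chunk-and-yield pattern (the helper name is my own, not a platform API):

```javascript
// Process a large array in chunks, yielding to the event loop between
// chunks so the UI thread can handle input, layout, and rendering.
function processInChunks(items, chunkSize, processItem, onDone) {
  var index = 0;
  function doChunk() {
    var end = Math.min(index + chunkSize, items.length);
    for (; index < end; index++) {
      processItem(items[index]);
    }
    if (index < items.length) {
      setTimeout(doChunk, 0); // yield; pending messages get processed here
    } else if (onDone) {
      onDone();
    }
  }
  doChunk();
}
```

Tuning `chunkSize` is a trade-off: bigger chunks finish the job sooner, smaller chunks keep the UI more responsive.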

Input

The list in the previous section includes a bullet for “input processing.” Timely processing of input is critical to providing a responsive and fluid user experience. This has never been more true than in the world of touch computing. Delays that are imperceptible to mouse users can completely ruin your touch experience. And user expectations for operations like panning a page with their finger are exceedingly high.

To address this, mobile platforms go to great lengths to prioritize input, particularly touch input. I’m obviously most well-versed in how Windows does it. But I believe I understand the basics of Apple’s implementation, based on things I’ve read over the years and my own experiences using an iPad 1 and more recently an iPad Mini. However, I am very happy to learn additional details or corrections (or confirmations) for anything I say here about Apple’s implementation. So if you’re an expert on their input handling system, please comment below!

My understanding of iOS’s input system is that it’s message-based, and like traditional Windows apps have been for ages, those messages are delivered to the app’s UI thread. To prioritize touch input, iOS uses a message filtering and prioritization system, where delivery of most message types is suspended when a touch input message (i.e. “finger down”) is received, and kept that way until the input operation has ended (“finger up”).

For a native developer, you get some control over this behavior, and as I understand it, you can decide which work your UI thread will and won’t do while the user is interacting with your UI. What I do not know is whether the system has a mechanism to interrupt your UI thread when a touch interaction happens, or whether there’s any support for handling input on a separate thread. Based on the behaviors I’ve seen from using apps (including Safari), I suspect the answer to both is no. Questions I see on StackOverflow about touch input processing on iOS support this. I could be wrong though, so feel free to correct me!

Unfortunately, for a web app developer targeting iOS, you don’t get direct access to this machinery. But the real problem is that any time your JavaScript code is running, the WebKit runtime cannot process touch events. This is a big problem, particularly for operations like panning a virtualized view.

Windows 8 solves this problem for common touch interactions (such as panning and zooming) by handling them on a separate thread. The technology enabling this is called DirectManipulation. As a web app developer, though, you don’t really need to be aware of “DManip” or how it works under the covers. All you need to know is that while your JavaScript is executing, the user can still pan or zoom your view. This same technology is used by both the web runtime and the XAML infrastructure, so in both cases you get it essentially for free. The result is what Windows developers call “independent manipulations” – because the end-to-end handling of the manipulation is executed independently of the UI thread.

I don’t know if other platforms have similar mechanisms. What I have observed is that manipulations in Safari on iOS are obviously not fully independent, as the UI thread freezes while you’re panning a web page (and if you get to an unrendered part of the page with the checkerboard pattern, it will stay there until you release your finger). So if Safari doesn’t do independent panning, I think it’s unlikely an app using WebView could.

Animation and Composition

Of course, handling input processing on another thread isn’t that useful if you can’t update the view for the user in response to it. So Windows 8 makes use of a separate composition and animation thread using a technology called DirectComposition. The web runtime on Windows uses this hand-in-hand with DirectManipulation to provide a completely off-UI-thread manipulation experience.

For other animations, you need to take care to stick to “independent animations” whenever possible. For Windows 8, this means you’re best off sticking to CSS3 animations and transitions on the CSS3 transform properties (i.e. animate translate(x, y) rather than the “left” and “top” properties). The built-in WinJS animations do a good (though in a few cases not ideal) job of using independent animations effectively, and for many apps they’re all you need.
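As a sketch, favoring a transform transition over animating layout properties looks something like this (helper name and element are mine, for illustration):

```javascript
// Slide an element using a CSS transform transition, which platforms with
// independent animations can composite off the UI thread, instead of
// animating layout properties like "left"/"top".
function slideTo(el, x, y) {
  el.style.transition = "transform 300ms ease-out";
  el.style.transform = "translate(" + x + "px, " + y + "px)";
  // Avoid: animating el.style.left from a timer — that forces dependent
  // (UI-thread) layout work on every frame.
}
```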

For more about independent composition and animation in IE / Windows 8 apps, read this article on MSDN.

I suspect on other platforms you should follow similar rules to get the best animation performance, but I’d love to learn more from experts at iOS or Android development.

Multi-threading

As mentioned earlier, in the simple case, all your JavaScript runs on the UI thread. Unless you use Web Workers. Web Workers give you a lot of control that native developers take for granted. In particular, the ability to run code which doesn’t block the UI! This is a huge boon to mobile web app developers, and making effective use of them is critical to building a responsive app which also does CPU-intensive data processing.

Better yet, as I understand it, each web worker gets its own script context and heap, thus its own garbage collector. So if you allocate a lot of objects on a background thread, then only pass small output data to the UI thread, you can help minimize the impact of garbage collection on your UI thread.
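A sketch of that pattern (the file name, message shape, and function names are my own assumptions, not from any particular app):

```javascript
// Pure helper meant to run inside a Web Worker. All the allocation churn
// while summarizing happens on the worker's own heap, so its GC passes
// never pause the UI thread; only the small result object crosses over.
function summarize(items) {
  var total = 0;
  for (var i = 0; i < items.length; i++) {
    total += items[i].value;
  }
  return { count: items.length, total: total };
}

// In the worker file (e.g. worker.js) you'd wire it up like:
//   self.onmessage = function (e) { self.postMessage(summarize(e.data)); };
//
// And on the UI thread:
//   var worker = new Worker("worker.js");
//   worker.onmessage = function (e) { /* update the view with e.data */ };
//   worker.postMessage(largeDataSet);
```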

DOM, layout, and rendering performance

In addition to the other platform aspects discussed, including the performance characteristics of your platform’s JavaScript engine, you are at the mercy of the web runtime’s HTML/CSS layout engine, its rendering implementation, and its performance characteristics for DOM manipulations. The latter is one case where I understand Chrome to excel.

My main point here is that even if JavaScript engine performance improvements do hit a wall, there are a lot of other places where performance improvements will almost certainly be found. At Build a couple weeks ago I got to talk with members of the IE team about some of the improvements to the rendering aspect in IE 11 / Windows 8.1, and the work underway here sounds very promising.

Libraries

Another factor for performance of web apps is the overhead and general performance characteristics of any libraries you use. A library can be a bunch of reusable JS code (like JQuery or WinJS), or a native platform exposed to JavaScript (WinRT, PhoneGap, etc). In either case, the library itself and your usage of it will affect performance.

One example is data binding. WinJS provides a handy data binding implementation, which I’ve made extensive use of. But for some situations, its performance overhead can start to be noticeable, and you probably have options for doing something more efficient for your needs.

Another factor is whether the library you’re using is tuned for the platform/runtime you’re using it on. WinJS is obviously well-tuned to IE on Windows. But others may not be, particularly if they are designed to work with outdated versions of IE.

Your JavaScript!

Yup, JavaScript is a big factor. And not just in the sense that “JavaScript is slower” or that you need to wait for JavaScript engines to get faster. How you write your JavaScript is critical to writing a fast app.

At the Build conference I attended a session by Gaurav Seth about JavaScript performance tuning. While some of the optimizations discussed may be specific to IE / Windows apps, I suspect many are generally applicable. For example, using object pools to reduce allocations and thus reduce the frequency of GC operations.

Here’s a slide from Gaurav’s deck (which I’ve annotated slightly) showing the difference in his example app just from reusing objects versus allocating new ones unnecessarily:

Perf improvements from reusing objects
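The reuse pattern from that slide can be sketched as a simple object pool (a generic illustration of the technique, not Gaurav’s actual code):

```javascript
// A minimal object pool: recycle objects instead of allocating fresh ones
// each time, cutting the allocation rate and thus the frequency of GC passes.
function Pool(create) {
  this.create = create; // factory used when the pool is empty
  this.free = [];       // recycled objects awaiting reuse
}
Pool.prototype.acquire = function () {
  return this.free.length ? this.free.pop() : this.create();
};
Pool.prototype.release = function (obj) {
  this.free.push(obj); // caller is responsible for resetting state on reuse
};
```

In something like a game loop, you’d acquire a particle from the pool each frame and release it when it expires, rather than constructing a new object every time.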

The optimizations you need to do in JavaScript may not be intuitive to a C++ or Objective C developer, but that doesn’t make them reasons to scoff at the language or runtime. Every language and platform has its quirks and idiosyncrasies. If you want to be an effective developer, you need to learn them, for whichever platform(s) you target.

Re-answering why GC is a bigger problem on iOS

A large section toward the end of Drew’s post asks how, if GC is so bad that Apple can’t offer it to iOS developers, Windows Phone gets its buttery smooth reputation while using an (almost) exclusively garbage-collected platform.

Well, take what we discussed above and assume similar mechanisms are in place on Windows Phone and its .NET implementation. I don’t actually know as many technical details here, but I assume the strategy is similar to Windows 8.

On Windows 8, a GC pass while you’re panning the view does not cause a stutter. On Android, apparently it does. Or maybe recent versions fixed that (I haven’t been able to figure that out – anyone know?). Neither the thread handling the input nor the thread doing the scroll animation ever runs any GC code (and the impact on CPU resources doesn’t really affect it, as the heavy lifting for the animation is done by the GPU). Yes, you can potentially scroll to a region the UI thread hasn’t yet rendered. But since the scrolling operation is fully independent, the UI thread shouldn’t have too hard of a time keeping up (and in practice, does not), and it can update the surface while you’re still panning it. So if a GC happens while panning, maybe you see some blank list items for an instant, but you don’t get a stutter.

What Drew should call his post

I think a better title would be: “Reminder that JS is crippled on iOS.” Or something along those lines. Yup. It is. But don’t blame JavaScript. Blame Apple. They don’t want you to use JavaScript because then your code would be more portable. And that’s the last thing they want!

My counter-example

Newseen is fast. Newseen is 100% JavaScript/HTML/CSS. Newseen is fast on a truly slow Tegra 3 processor, and really damn fast on anything beefier. And I wrote that in about a month (client and server) with little time to optimize perf (I do have a few improvements in mind, because I’m crazy like that). Sure, the most CPU intensive work happens on the server, but for users who link their Twitter accounts, it’s doing non-trivial data processing on the client. And all in JavaScript.

Will it be fast as-is on iOS? Maybe not. A recent iPad (heck, maybe even an iPad 2) has a beefier CPU, so there’s hope. But since the platform is in some ways just not optimized at all for the web (i.e. no independent manipulations), and in other ways actively hostile to web apps (i.e. crippled JS engine outside Safari), maybe I’ll be forced to go native there. That sucks. But that’s no one’s fault but Apple’s.

Going off-topic for a moment, I’ll say that by far the biggest perf problem in Newseen is the Microsoft Advertising SDK (so Pro users not only lose the ads, but get a better performing app). The most recent SDK update seems to have slowed things down considerably, although it’s possible this is an interaction with the Win8.1 preview I’m running on my Surface RT now. That’s on my to-do list to investigate. Either way, at least when developing for Windows, the overhead of using JavaScript is the least of my concerns 🙂