
From Win32 to Cocoa: A Windows user’s would-be conversion to Mac OS X, part III

From the archives—Windows disappointment led to an earnest look at the Mac, "my mistake."

All hail Chairman Jobs

The other big deal is that, well, Apple is Apple. Apple, as a company, prides itself on being a leader, not a follower. As Steve Jobs famously quoted Wayne Gretzky, "I skate to where the puck is going to be, not where it has been." So the charismatic (some might say dictatorial) Apple leadership wants the company to be seen as one that looks forward, not backward.

This attitude is manifest in Apple's products, including its all-important OS. Apple has never been a company that's slavishly devoted to backwards compatibility if that backwards compatibility jeopardizes its ability to move forward. In these days of Intel Macs, the OS is the biggest differentiating feature of Apple's hardware. People buy Macs because of the software ecosystem, so keeping the OS (and the programs running on it) high quality is extremely important.

So, Apple uses Cocoa for its applications. Not all of them—Apple's bought-in applications generally use Carbon, as do a few high-profile parts of Mac OS X itself. But nonetheless, there are high-profile Apple applications, like Safari and Aperture, that are built using Cocoa. This has several benefits. The most obvious is that Apple is actually using the APIs it's developing. It has to make sure that the APIs work well and do the things that a wide variety of developers want to do, because if they don't, that just hurts Apple's own software.

With Apple, this is a two-way process. Apple's applications don't just prove that the API works; they're also used to develop new UI devices that in turn get rolled into the OS. For example, a kind of "palette" window for inspecting and adjusting object properties first appeared in iPhoto and other applications. This is something that lots of software can make use of, and so with Mac OS X 10.5, a system-level palette window was introduced. Instead of a proliferation of slightly different first- and third-party implementations of the concept, Apple has taken a good idea and exposed it to any developer.
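To make that concrete, here's a minimal sketch of creating such a panel from Cocoa, assuming the translucent "HUD" panel style that 10.5 added to NSPanel is the sort of system-level palette described above; the title and geometry here are invented for illustration.

    #import <Cocoa/Cocoa.h>

    // Illustrative only: a floating, inspector-style panel using the HUD
    // window style introduced in Mac OS X 10.5. Title and size are made up.
    NSPanel *MakeInspectorPanel(void)
    {
        NSUInteger style = NSTitledWindowMask | NSClosableWindowMask |
                           NSUtilityWindowMask | NSHUDWindowMask;
        NSPanel *panel = [[NSPanel alloc]
            initWithContentRect:NSMakeRect(0, 0, 260, 320)
                      styleMask:style
                        backing:NSBackingStoreBuffered
                          defer:YES];
        [panel setTitle:@"Inspector"];
        [panel setFloatingPanel:YES];      // stay above ordinary document windows
        [panel setHidesOnDeactivate:YES];  // get out of the way when the app isn't active
        return panel;
    }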

This two-way feedback process doesn't just happen with the GUI, of course. Apple's applications draw on many different parts of the API, so for any framework that Apple provides, there will be an Apple application using that framework (depending on it, in fact). This informs a lot of Apple's design decisions, and it's something that Microsoft lacks.

Apple has always been a company that <a href="https://arstechnica.com/gadgets/2014/10/the-ipod-is-dead-and-apples-love-for-the-unknown-band-is-too/">takes its audio fairly seriously</a>.
Aurich Lawson

Sound design

An example of this phenomenon at work can be found in Apple's and Microsoft's audio APIs. Apple has both high-end and low-end audio software that uses its audio frameworks; Microsoft doesn't. In Windows Vista, Microsoft tried to produce a new audio API that was suitable for low-latency multi-channel audio software. This required changes to the whole audio stack, from the applications right down to the drivers. It's the drivers that Microsoft had problems with.

Audio drivers were an area where Vista had trouble, especially in the first months after its release. There just weren't many out there, and those that did exist were often feature-deficient compared to their XP predecessors. The hardware vendors deserve part of the blame for this, but Microsoft must take its share too, because its first attempt at a new low-latency audio stack didn't work particularly well. The core of any audio stack is buffers filled with sound data: the application fills the buffer, and the sound card plays data from it. If your buffers are too big, you incur higher latencies, because (approximately speaking) you have to wait until a whole buffer's worth of data has been played before you can change it. If they're too small, you run the risk of skipping and stuttering, because the sound card can run out of data before the software has had a chance to put more into the buffer.
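To put rough numbers on that tradeoff, here's a back-of-the-envelope sketch; the arithmetic is mine, not anything from either vendor's API. The latency floor is roughly the buffer length divided by the sample rate.

    #include <stdio.h>

    /* Illustrative arithmetic only; not any real audio API. */
    static double buffer_latency_ms(unsigned frames, unsigned sample_rate_hz)
    {
        /* A full buffer has to play out before the application can change it,
           so the latency floor is roughly frames / sample rate. */
        return 1000.0 * (double)frames / (double)sample_rate_hz;
    }

    int main(void)
    {
        printf("%.1f ms\n", buffer_latency_ms(4096, 44100)); /* ~92.9 ms: safe, but sluggish */
        printf("%.1f ms\n", buffer_latency_ms(128, 44100));  /* ~2.9 ms: snappy, but easy to starve */
        return 0;
    }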

The first new Vista audio API didn't provide a way for drivers to tell applications "the buffer is nearly empty". Instead, applications had to keep asking "Is the buffer empty now?". And because the buffer status changes as the sound is played, they had to ask over and over again. "Is the buffer empty now? Is the buffer empty now? Is the buffer empty now? Is...". This is called polling, and as a general rule of thumb, polling-based systems are to be avoided. Each time the application checks the buffer, it has to do extra work, and if the buffer still has plenty of data in it, that work is pointless. The thing is, this flaw is kind of obvious to anyone writing audio software. No one likes polling. Other low-latency audio APIs (Steinberg's popular ASIO and Apple's Core Audio included) tell the application when the buffer needs more data.
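The shape of the two approaches is easy to sketch. Every name below is made up; this is neither DirectSound nor ASIO nor Core Audio, just the polling pattern placed next to the notification pattern.

    #include <stdbool.h>
    #include <stddef.h>

    /* Hypothetical stand-ins for driver-side hooks, stubbed so this compiles. */
    static bool buffer_needs_data(void) { return false; }
    static void fill_buffer(void)       { /* copy fresh samples into the buffer */ }

    /* Polling: ask the same question over and over; every "not yet" is wasted work. */
    static void feed_by_polling(int passes)
    {
        for (int i = 0; i < passes; i++) {
            if (buffer_needs_data())
                fill_buffer();
        }
    }

    /* Notification: register a callback once, then do nothing until the driver
       says the buffer is running low. */
    typedef void (*buffer_low_fn)(void);
    static buffer_low_fn low_water_callback = NULL;

    static void register_buffer_low_callback(buffer_low_fn cb) { low_water_callback = cb; }
    static void on_buffer_low(void) { fill_buffer(); }

    int main(void)
    {
        feed_by_polling(1000);
        register_buffer_low_callback(on_buffer_low);
        return 0;
    }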

This design issue would be obvious to anyone writing an audio application. But it apparently didn't occur to Microsoft. It didn't occur to them with DirectSound, either, which likewise has no reliable, low-latency mechanism for buffer notifications. It wasn't until Microsoft sat down with third-party audio developers quite late in Vista's development cycle that it changed the driver API to introduce a notification mechanism. With this, the hardware can actually tell the application that it needs more data. By the time the change was made, vendors had already put work into the first new Vista driver model, so they probably weren't entirely happy that Microsoft went ahead and changed it (though the audio software vendors certainly were).

Apple, on the other hand, does use its APIs. So Apple has a much better idea of what works well, and a much better idea of what the system ought to do and how it ought to work. Microsoft provides APIs to third parties and hopes that they'll be OK; Apple provides APIs to itself, and when it's certain that they work well, it lets third parties loose on them. If something's good enough for Apple, it's probably good enough for everyone else. There is some irony in this; in software development this concept of using your own software is known as "eating your own dogfood," and it's an idea that was, at one time, pushed strongly by Microsoft.

So... is Peter Bright going to <a href="https://arstechnica.com/gadgets/2005/08/949/">join Will Ferrell</a> or what?
Apple

Where do you want to go today?

The final point in Apple's favor is that Apple has a direction. Apple is committed to Objective-C, Cocoa, and the surrounding frameworks. This is perhaps something of a new development. In the early days of Mac OS X, Apple hedged its bets. Cocoa was less complete, and the C-based Carbon API was promoted not only as a way of porting old "Classic" Mac OS applications but also as a way of writing new programs. (There are still places in Apple's developer documentation where Carbon is described as suitable for new applications.) Further, in the early days, Cocoa wasn't just for Obj-C; it could also be used from Java. The reason for this is that Apple just wasn't sure that developers would go for Cocoa. For good or ill, learning a new language is seen by a lot of people as a big hurdle. The worry was that the combined burden of having to learn a new language and a whole new way of putting together applications would scare off developers, so the far more familiar programming model of Carbon was pushed as a first-class alternative to Cocoa. I've even heard it argued that one of the reasons the Finder is a Carbon application was to demonstrate to developers that Carbon was an important piece of the Mac OS X puzzle, and not something that they should dismiss as "legacy."

2008 was a much simpler time.
Apple

Nowadays, that's not the case. The Java bridge was deprecated in 10.4 and isn't subject to further updates. Carbon is still there, and some of the non-GUI parts of Carbon are still important, but for GUI programs, Carbon is clearly on the way out. We know this because there's no 64-bit version of Carbon's GUI APIs. Apple started work on them, with a view to shipping them as part of 10.5, but late in the development process it decided not to bother. If you want to write 64-bit software with a GUI on Mac OS X, you're going to write it with Cocoa.

Although I think Apple would always have preferred for Cocoa to win out and become the number one way to write applications for Mac OS X, it's only with 10.4 and 10.5 that it has become obvious that this is going to be the way to write new software. If there's a problem with what Apple has done, it's that it hasn't been proactive enough about saying "it is not a good idea to write new applications using Carbon." The company clearly has a direction; it shouldn't be afraid of that. Related to that, there really ought to be an effort to move Apple's own applications with Carbon UIs, the Finder among them, to the Cocoa world.

This willingness to leave old technology behind is a great strength of the Apple platform. Rather than enshrining past decisions in perpetuity, Apple has a willingness to say "enough's enough; this new way is better, so you should use it". There's no denying that this is a double-edged sword. The upside is that this approach lets Apple concentrate its (relatively limited) resources, and if all software (eventually) uses only one toolkit, the UI experience will be much more consistent and familiar to users. The downside is that it does require more work for software developers. Those with large Carbon applications now have a tough choice: rewrite their UIs, or remain 32-bit forever. One high-profile practical consequence of this dilemma is that the next version of Photoshop (CS4) will be 64-bit on Windows but not on Mac OS X; a Cocoa UI won't be available until Photoshop CS5.

Although this does cause some short-term inconvenience, in the long run it makes for a more nimble platform that offers a better experience to users. Apple can make sweeping changes (switching from PowerPC to x86, say) without having to think about how to migrate every last bit of legacy technology. Users benefit from applications that are actively maintained, that leverage new OS features as and when they're developed, and that work in a consistent way without surprises. The regular pruning of dead wood that Apple's software ecosystem undergoes makes the whole thing much healthier.

A tearful farewell

All things considered, Apple is offering an attractive platform. The APIs are robust, the tools are good (and getting better), the design philosophy is coherent, and the platform as a whole has a direction. The company will continue to improve and refine the experience for users and developers alike.

But it's not without some regret that I move away from Windows. There are good things that come out of Microsoft. I like Visual Studio a lot, I think Office 2007 is fantastic, and there are parts of the .NET platform that could be very good. I think Microsoft could—and should—do better. And if I get around to part four of this series, I hope to look at just how this might be achieved.

Editor's note from the future (May 2018): In our best Ron Howard narrator voice, "But he didn't switch." Peter Bright remains Ars' Microsoft-using guru to date. When asked about resurfacing this series, he put it plainly: "My main concern is people complaining that I never finished... after abandoning the idea and ditching macOS almost entirely because Apple just doesn't build the hardware I want, lol.

Though I bet a lot of my complaints still have some truth today."
