Vol. 46 No. 16 · 15 August 2024

Hey Big Spender

Donald MacKenzie on what your smartphone knows about you


Playing​ Candy Crush Saga on your phone involves moving brightly coloured sweets around to the sound of cheerful music. Get three or more identical sweets into a line, and they gently explode and disappear. Your score ticks up, and a cascade of further sweets refills the screen. If all goes well, you’ll soon complete a level. A warm, disembodied, male voice offers encouragement: ‘Divine!’, ‘Sweet!’

The iPhone version of Candy Crush was released in November 2012, and an Android version a month later. In December 2013, the BBC reported that train carriages in London, New York and other big cities were full of commuters ‘fixated on one thing only. Getting rows of red jelly beans or orange lozenges to disappear.’ It has always been free to install Candy Crush, and it has been downloaded more than five billion times, which suggests that hundreds of millions of people must have played it. More than two hundred million still do, according to the game’s makers, the Anglo-Swedish games studio King. Those players aren’t going to exhaust the game’s challenges any time soon: Candy Crush has more than fifteen thousand levels, and dozens more are added every week.

Candy Crush is big business. By 2023, it had earned more than $20 billion in total for King and Activision Blizzard, the games conglomerate that bought King in 2016 for $5.9 billion. Activision Blizzard has now itself been bought by Microsoft for $69 billion, a consolidation of the games sector that caused the UK’s competition regulator, the Competition and Markets Authority, enough concern that it initially tried to block it.

How do you make money out of a game – not necessarily big money, but at least enough to repay the often high development costs – if it’s free, as nearly all smartphone games are? Candy Crush is more difficult than it looks. Sooner or later you will end up temporarily out of ‘lives’, and at that point it’s tempting to spend a modest sum to keep playing or to boost your future chances of success. I succumbed as early as my second session playing the game. A half-price weekly deal was on offer for 99p. I tapped, was taken to Apple’s App Store, which has my credit card on file, and a thumbprint sealed the deal.

Most players of games such as Candy Crush are less easily tempted to spend than I was. The app economy analyst Eric Seufert told me that typically ‘95 per cent, 97 per cent of all users who play a game will never monetise.’ If your game is popular enough, that may still mean hundreds of thousands of players spending within it. Attracting potential spenders is therefore a vital part of what people in the business unromantically call ‘user acquisition’. Many of them, like me, will spend only small sums, and only once in a while. The most valuable players are the bigger spenders referred to as ‘whales’. Since the basic goal in Candy Crush is to get at least three digital objects next to each other in a line, it’s known in the business as a ‘match-three’ game. There are hundreds, perhaps thousands, of such games. A ‘match-three whale’ is someone who spends tens of dollars a month in one or more of them. Find a decent number of whales, and you’ve got a valuable income stream.

Charts of ratings and total downloads, word-of-mouth recommendations, favourable reviews by games journalists, endorsements by prominent influencers and general social media buzz can all bring players to a game. But building a mass user base frequently requires large-scale advertising, often on a social media platform such as TikTok, Snapchat, Facebook or Instagram. The most obvious place to advertise your game, though, is in another game. I’m a late convert to the pleasures of smartphone games. Until the last few months, someone advertising a game to me, however attractively, would have been wasting their money. But if I’m already playing a game on my phone, then I’m an a priori plausible target. More specifically, if I’m playing a match-three game, then why not advertise your own match-three game to me? It’s not going to cost me anything to try it out, and I might find I prefer its slightly different format, images, colours or soundtrack. Handily, there’s a well-established way of making me watch a video ad in whatever game I’m currently playing, and that’s to give me an in-game boost as a reward for watching it.

Which ads get to be shown often comes down simply to how much the advertiser is prepared to pay, and, as one games executive told me, the owners of other games ‘are willing to pay a lot’ – more than an advertiser for a different kind of product would – ‘because that’s where they’re going to drive their installs’. As a result, Seufert said, often ‘the overwhelming majority, 95 per cent, of all ads shown in games are for other games.’ Sometimes, an ad for a game is itself a game, a brief sample of the real thing, though that’s an expensive form of advertising, and I’m told it has become less popular recently.

‘I’m buying users from you, you’re buying users from me, a lot of revenue was materialised but actually it all got sort of negated by the fact that we’re just buying from each other,’ Seufert said. The executive I just quoted told me that ‘fundamental tensions’ come with earning money by showing ads for other games: they ‘can be competitors, which isn’t awesome’, and can cause ‘my players to churn out’. Most ads in smartphone games are sold through automated bidding systems, not in face-to-face negotiations, so it isn’t always straightforward to block specific unwanted ads. If a competitor ‘is determined enough, they can get an ad in your game, no question’.

It can, however, be hard for a games studio to say no to the revenue stream that advertising provides. Indeed, there’s an entire genre of relatively rudimentary ‘hypercasual’ games, which I’m told are in effect entirely funded by showing ads for other games. As another experienced practitioner puts it, the justification is that ‘I’m going to make some ad revenue and then I’m going to spend the ad revenue to create a marketing budget [to acquire users for my game] … You’re hoping you’re giving away your less valuable players, but I’m not sure it’s really that scientific.’

The mechanics of the advertising of games, apps and other products on phones tell us a lot about the rigours of the digital economy. In the goods economy, some firms advertise simply to keep consumers’ views of their brand well-burnished: a vehicle manufacturer wants to interest you in its cars, but doesn’t expect you instantly to buy one. By contrast, much, perhaps most, advertising on phones is designed to get a ‘direct response’: an immediate goal, or ‘conversion’, in the form of a purchase, a sign-up or subscription, an install of a game.

The work of a games studio, though, isn’t done once someone installs its game on their phone. Its chief concern is likely to be the player’s ‘lifetime value’ (the total revenue they will bring), and whether it will exceed the cost of the advertising that drew them to the game in the first place. There’s a series of ‘app events’, as they are called, that the studio will want to monitor closely. How many people install the game but abandon it before finishing the tutorial? How many keep playing to at least, say, level 5? Above all, how many of them make in-game purchases, and if they do, when, how often and for how much? That said, direct-response advertisers can’t afford to wait too long for lifetime value to become evident: they want to maximise the efficacy of advertising in real time. As one digital advertising specialist puts it, ‘you’ve got these bidders [for advertising opportunities] with machine learning that are saying this segment is working, bid higher here because there are conversions occurring. [All these] automated feedback loops are running.’
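The arithmetic the studio is watching can be sketched with invented numbers: a campaign only pays off if the expected lifetime value per install beats the cost per install. Every figure below is illustrative, not drawn from any real game.

```python
# Back-of-the-envelope unit economics for a user-acquisition campaign.
# All numbers are invented for illustration.
installs = 10_000
ad_spend = 25_000.00            # dollars spent on the campaign
cpi = ad_spend / installs       # cost per install: $2.50

payer_rate = 0.04               # only ~4% of players ever monetise
avg_spend_per_payer = 80.00     # a handful of 'whales' dominate this figure
ltv = payer_rate * avg_spend_per_payer   # expected revenue per install

profitable = ltv > cpi          # $3.20 expected vs $2.50 spent
roas = ltv / cpi                # 'return on ad spend', about 1.28
```

The whale-dependence is visible in the numbers: halve `avg_spend_per_payer` and the campaign loses money, which is why finding and keeping big spenders matters so much.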

Machine learning optimisation is a service offered throughout digital advertising, but the doyens are Google and Meta. I took an online course taught by Seufert, an entire session of which was devoted to advertising on Meta’s main platforms, Facebook and Instagram. Meta enables you to specify with surprising precision what you want its advertising systems to optimise on your behalf. The obvious choice would be simply the number of installs of your game or other app. In 2016, Facebook introduced ‘app event optimisation’, which focuses advertising to generate installs from players its systems predict will perform actions of the kind you want to prioritise, such as in-app purchases. Or you could try to maximise the total revenue that players will bring. That’s what Meta calls ‘value optimisation’. It can be expensive, but it’s the technology you want if you are hunting whales.

As in most digital advertising, auctions determine which ads get shown to which users, but you don’t yourself have to take on the daunting task of working out how much to bid. Specify your goal, your budget, and perhaps the minimum ‘return on ad spend’ that would be acceptable to you, and Meta bids into its own auctions on your behalf. Seufert described the ways in which advertising on Facebook has changed. In 2015 or thereabouts he and his colleagues would hold meetings to discuss how to target their Facebook advertising. ‘Maybe we should try car enthusiasts, because … we’re trying to reach men … maybe we should target people that “like” Bruce Willis’s page because he’s an action star … That was what you did.’ However, he said, Facebook’s increasingly sophisticated machine learning has made such discussions a waste of time. ‘Facebook has basically internalised all that and now you’re just feeding it with … inputs [e.g. multiple variants of your ads so that it can test which are most effective]. None of that stuff you used to do matters.’ Now, the job of the advertising practitioner is ‘to feed this experimentation machine’.

A major digital advertising platform such as Facebook or Google is like an iceberg, Seufert said. Visible above the waterline are the characteristics of users on which advertisers tend to focus, such as age, gender and ‘interests’ such as an enthusiasm for cars. But below the surface, invisible to the advertiser and too copious to make full sense to human beings, is the much larger volume of heterogeneous data that the platform possesses. This is the data that can make platforms’ machine learning optimisation of advertising considerably more effective than human-guided targeting.

The implications of the iceberg go well beyond the advertising of games. One of the most interesting of the 95 people I have interviewed about digital advertising runs a business selling handmade saris, and has a strong commitment to preserving village handicrafts. In our first conversations, he was highly critical of Big Tech. But he has gradually learned that his best market is Tamil Brahmins, in India and in the diaspora. There is no list of them for him to work from, and on Facebook or Google ‘there’s no classification saying “target Tamil Brahmins”.’ Yet using data on ‘customers who’ve bought … from me before, who’ve interacted with my products, Google and Facebook are able to find them … My clients look for thirty-minute recipes. They look for Bollywood news. They look for Tamil cinema news … I have to trust the machine to be more effective than me to do this.’

The machine is, of course, amoral. It optimises for whatever ‘conversions’ it is told to pursue: installs of Angry Birds; sales of saris made by village women; voters signing up for a Donald Trump rally. People whose political predilections resemble mine often like to think that the explanation for Trump’s 2016 election win or the result of the Brexit referendum is cunning microtargeting by political consultants using platforms such as Facebook, perhaps funded by Russian money or informed by Cambridge Analytica’s psychometric data. But Ian Bogost and Alexis Madrigal, in an article for the Atlantic from April 2020, have a more convincing hypothesis with respect to Facebook: that the Trump campaign’s success online in 2016 resulted simply from its use of Facebook’s standard machine learning optimisation procedures.

Trump’s ads were banal, but rather than trying to build the case for him they often encouraged a specific action, a conversion: ‘Buy this hat, sign this petition, RSVP to this rally.’ Researching the ads for a Trump rally in Milwaukee in January 2020, Bogost and Madrigal found little sign of the targeting of specific demographic groups. They suggest that instead the Trump campaign’s use of Facebook began, just like my sari vendor’s, by providing its system with a ‘custom audience’: a list of people who had already taken an action, such as supplying an email address or phone number, which suggested they were Trump supporters. Machine learning can then search for ‘lookalikes’: people who resemble the custom audience. But its search for likeness would go well beyond the characteristics that a political sociologist might think of as influencing voting preferences. It would use the entire submerged portion of the data iceberg.
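A lookalike search of this kind can be sketched as a similarity ranking over feature vectors. The features, names and numbers below are invented, and a real system would use far richer data and models than this toy cosine-similarity version.

```python
# Toy 'lookalike audience' search: start from a seed (the custom
# audience), summarise it, and rank other users by similarity.
# Feature vectors here are invented; real systems draw on the whole
# submerged portion of the data iceberg.
import math

def similarity(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Seed: feature vectors of people already known to be supporters.
seed = [[1.0, 0.2, 0.9], [0.9, 0.1, 1.0]]
centroid = [sum(xs) / len(xs) for xs in zip(*seed)]

candidates = {
    "user_a": [0.95, 0.15, 0.90],   # close to the seed profile
    "user_b": [0.10, 0.90, 0.05],   # very different
}
ranked = sorted(candidates,
                key=lambda u: similarity(centroid, candidates[u]),
                reverse=True)
# ranked[0] is the best lookalike; ads would be served to the top of
# this ranking, whatever features happen to drive the resemblance.
```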

Bogost and Madrigal report a ‘source close to the 2016 Trump campaign’ telling them that its use of Facebook was inspired by the successes of the mobile game studio Machine Zone’s machine learning optimisation of user acquisition. But this particular model might not have been needed: by 2016 the practices they report were fast becoming standard among those who had grasped the way machine learning was changing advertising.

The submerged portion of the iceberg is enormous, but exploiting its full power to optimise advertising means sorting the data within it. The crucial issue is what practitioners call ‘identity resolution’: the capacity to discern, in an automated way and with some degree of accuracy, that two or more often very different data traces involve the same human being. In advertising on phones, identity resolution largely boils down to something deceptively simple: whether or not the various data traces involve the same phone. In the early years of smartphones, it wasn’t difficult to tell. Every smartphone, whether Apple or Android, had a unique identifier number, which the phone’s owner could not alter or delete, and which was visible to the apps installed on the phone and the ad networks that displayed ads on it. Apple told app developers not to ‘transmit data about a user without obtaining the user’s prior permission’, but there seems to have been no insurmountable technological barrier to it. Smartphone apps leaked data, sometimes on a large scale. ‘Your Apps Are Watching You,’ the Wall Street Journal warned its readers in December 2010.

By that time, privacy-conscious people were regularly deleting the ‘cookies’ (strings of digits unique to each user) that websites and web advertisers deposit in the browsers on their computers. Although cookies are a web technology not available to phone apps, Apple decided to give iPhone owners a facility equivalent to purging cookies. But it wasn’t yet ready to leave advertisers without a dependable way of answering the question: ‘Is this the same phone?’ In 2012, it began denying apps and ad networks access to a phone’s permanent identifier, but instead made available to them a 32-digit Identifier for Advertisers (IDFA), which uniquely identifies a particular phone but can be changed whenever the owner wants. In 2016, Apple gave iPhone users the additional capacity to ‘zero out’ their IDFAs: in this case, when an ad network or app asks the phone for its IDFA, it receives in response an uninformative string of 32 zeros. Google introduced a similar entity, the GAID (Google Advertising Identifier), which owners of Android phones can delete in a similar way. But most people, me included, aren’t savvy enough to alter or delete our phone’s IDFA or GAID. In practice, therefore, IDFAs weren’t so different from the permanent identifiers they replaced.
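The reset-and-zero-out behaviour can be sketched as follows. The class and method names are invented for illustration; they are not Apple's actual APIs, which live inside iOS rather than in app code.

```python
# A minimal sketch, with invented names, of how an OS-level advertising
# identifier behaves under owner-initiated resets and zeroing out.
import uuid

ZEROED_IDFA = "00000000-0000-0000-0000-000000000000"

class Phone:
    def __init__(self):
        self._idfa = str(uuid.uuid4()).upper()   # 32 hex digits
        self._zeroed = False

    def reset_idfa(self):
        """Post-2012: the owner can replace the identifier at will,
        breaking the link to data trails tied to the old one."""
        self._idfa = str(uuid.uuid4()).upper()

    def zero_out(self):
        """Post-2016: the owner can opt to hand out only zeros."""
        self._zeroed = True

    def idfa_for_advertisers(self):
        """What an app or ad network receives when it asks."""
        return ZEROED_IDFA if self._zeroed else self._idfa

phone = Phone()
before = phone.idfa_for_advertisers()
phone.zero_out()
after = phone.idfa_for_advertisers()   # the uninformative all-zeros string
```

The practical point in the text survives the sketch: unless the owner actively calls something like `reset_idfa` or `zero_out`, the identifier is as stable as the permanent one it replaced.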

Apple introduced the IDFA, an advertising technology specialist told me, ‘so that advertisers could [continue to] “attribute” marketing campaigns to their app and have a [measurable] return on investment and run effective advertising’. But the scale on which IDFAs were used to link up diverse data created capabilities that Apple had probably not fully anticipated. What it made possible was ‘effective deterministic [targeting]. They would know that you use Deliveroo to get Chinese or Vietnamese food on a Saturday, they know that you use Tinder … They’d have known bloody everything.’

I was puzzled at first by this use of the word ‘deterministic’, because little in life is that certain. I now see what he meant. If a specific IDFA is associated with, say, repeated purchases within a match-three game, and an advertisement for another such game leads the user to play that one instead, it’s very likely indeed that they will spend in the new game too. Being a whale is repetitive behaviour: you are unlikely to stop just because you have switched games. And if you were an ad network acting on behalf of multiple advertisers, then in the ordinary business of tying together ads with app installs, purchases etc, you would have collected lots of IDFAs and GAIDs with lots of records of actions on the same phone tied to them. That way, you could begin to perceive some structure in the submerged portion of the iceberg, and the efficiency of your advertising would increase markedly.

If routine business didn’t bring in sufficient numbers of IDFAs, there were ways of getting more. An ad network would typically offer games studios a fixed ‘cost per install’. There was, however, frequently a quid pro quo. It was a waste of money to serve ads to people who had already installed a particular game. So the ad network would ask for a ‘suppression list’, a list of the IDFAs or GAIDs of all the game’s existing users. Once they had the list, it was an important resource for the ad network in advertising other, similar games.

‘Whoever has the most data wins,’ the specialist said. The way in which IDFAs were being used to accumulate data and acquire capabilities in prediction, targeting and ad optimisation ‘pissed Apple off’: user privacy is an iPhone selling point. At its Worldwide Developers Conference in June 2020, Apple announced a new privacy policy, App Tracking Transparency, which it implemented in April 2021. ATT tightly restricts apps’ use of IDFAs.

Central to ATT is the ‘prompt’: a screen that must be shown when an app that wants to track you beyond its own confines is first installed. You tap on the prompt screen either to accept tracking or to reject it. If you tap ‘reject’, the app isn’t allowed to penalise you by restricting the features available to you. Your choice is recorded in your phone’s operating system, and if the app in question ever asks your phone for its IDFA, the operating system ensures that all the app gets is the string of 32 zeros.

Only a minority of iPhone owners tap the button to consent to tracking: typically around a fifth, I’m told. But that isn’t the end of it. Let’s say you’re trying to ascertain that a phone on which an ad for a game has been shown in app A is also the phone that has installed the game being advertised (app B). Under ATT, the only way you’ll be able to use an IDFA to make the link is if the phone’s owner taps ‘accept’ in both app A and app B. If just a fifth of iPhone owners consent to tracking in app A, the number consenting in both app A and app B is very likely to be even lower. Without IDFAs, or something to replace them, all that an advertiser or ad network has to work with is a bunch of records of the (probably very large number of) ads it has displayed, and a different bunch of records of installs of the game or purchases of the product being advertised. Connecting up the two bunches in order to measure and optimise the effectiveness of advertising is very far from easy.
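The double-consent arithmetic is easy to sketch, assuming (purely for illustration) that consent decisions in the two apps are independent; in practice they need not be, but correlation would have to be very strong to change the picture much.

```python
# Why ATT's consent requirement compounds: an IDFA-based link between
# an ad shown in app A and an install via app B needs consent in BOTH.
# The consent rates are the rough figures quoted in the text; the
# independence assumption is illustrative.
p_consent_app_a = 0.20   # roughly a fifth consent to tracking in app A
p_consent_app_b = 0.20   # and similarly in app B

p_linkable = p_consent_app_a * p_consent_app_b   # 0.04 under independence
print(f"{p_linkable:.0%} of ad-install pairs remain linkable by IDFA")
```

So even before any correlation between decisions is considered, only around one ad-install pair in twenty-five can still be tied together deterministically.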

Talk to people in digital advertising, and you will hear a lot of speculation, much of it hostile, about Apple’s motivations. (Apple’s internal decision-making has not been made public, though its chief executive, Tim Cook, was unequivocal when he said in a call to stock analysts in October 2021: ‘We believe strongly that privacy is a basic human right. And so that’s our motivation there. There’s no other motivation.’) The hostility arises because Apple’s restrictions on the use of IDFAs imperil the most important method by which sense is made of the app economy’s data icebergs. Apple was able to press ahead nevertheless because, within that economy, it has ‘infrastructural power’. This term, coined by the sociologist Michael Mann in 1984, refers among other things to the leverage that you, or a system you control, can exercise by virtue of being necessary to other people or their systems in doing the things they need to do.

Apple’s engineers write iOS, the operating system that controls every iPhone. All iPhone apps run on iOS, and some of the new privacy policy’s rules are directly embedded in it, making them hard to circumvent. If an app breaks the rules, it also faces the potentially catastrophic penalty of exclusion from another crucial part of the infrastructure, Apple’s App Store. Globally, more people use Android phones than iPhones, but iPhone owners tend to be more affluent and therefore have more money to spend on in-app purchases, and currently most of them can install apps only via the App Store. (This has changed recently in the European Union because of its Digital Markets Act, but it seems likely that most of the EU’s iPhone owners will continue to install their apps through the App Store.)

Facebook’s response to Apple’s new policy was an object lesson in the effectiveness of infrastructural power. At first it protested fiercely, in December 2020 even taking out full-page ads in the Financial Times and New York Times describing the policy as ‘devastating to small businesses’ because it endangered their ‘ability to run personalised ads and reach their customers effectively’. For more than a decade, though, users have interacted with Facebook mostly on their phones, so in practice Facebook is a phone app. And Instagram has always been an app, with the additional twist that heavy users of Instagram tend particularly to like iPhones thanks to their high-quality cameras. So non-compliance with Apple’s policy would have had effects that couldn’t be contemplated. ‘We have no choice but to show Apple’s prompt,’ said Dan Levy, a Facebook vice president at the time. ‘If we don’t, they will block Facebook from the App Store.’

Facebook may have lost several billion dollars in ad revenue as it rebuilt its systems to cope with the loss of so much finely granular data, which had effects on virtually all forms of advertising on Facebook. Other factors were buffeting tech stocks at the time too. Between September 2021 (by which time the effects of Apple’s implementation of its new policy had started to become fully evident) and October 2022, Facebook/Meta’s share price slid from $352 to $93.

Meta’s stock has, however, now more than recovered, aided by investor enthusiasm for artificial intelligence and by the increasing success of the company’s ‘retooling’ of its targeting and measurement operation. Smartphone games have been less fortunate. The greater difficulty – and therefore the cost – of finding whales and other users who will ‘monetise’ has made it two or three times more expensive to launch a game, one expert told me. And the resulting fall in advertising revenues, Seufert told me, has in turn badly affected the economics of ad-dependent hypercasual games. The previously healthy growth of the computer game sector as a whole has slowed dramatically, with widespread job losses: 10,500 globally in 2023, followed by more than 5000 in January 2024 alone.

In the last couple of years, the overt controversy sparked by Apple’s new privacy policy has gradually receded, but there is now a subterranean battle for knowledge of the actions you take using your phone. It is a conflict between two different ways of measuring the effectiveness of advertising. The first is Apple’s preferred mechanism, Store Kit Ad Network (SKAN), which it offers free of charge to ad networks and advertisers. Your iPhone plays an active part in SKAN: data crucial to measurement and ad optimisation, such as your taps on ads, is stored not on an external server, but in the phone’s memory.

No access to the ad data on your phone is allowed for 24 hours after it is stored, plus a further randomly varying period of as long as 24 hours, the rationale being to stop exact times being used to match ads up with subsequent actions such as purchases or game installs. Once the 24-48 hours is up, the data is sent from your phone to the relevant advertiser or ad network via an Apple server that ensures the preservation of what Apple calls ‘crowd anonymity’ (a notion that warms my sociologist’s heart). In essence, it means checking that there’s nothing about the data that stands out enough – an unusually big purchase, for example – to make your phone distinguishable from at least a moderately large crowd of other phones. As Seufert puts it, the ad network learns that ‘this campaign delivered an install’ or purchase, but Apple’s system in effect says ‘I won’t tell you who the person is.’
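A toy model of this reporting regime might look like the following. The delay bounds follow the description above, but the function names, report fields and the crowd-size threshold are invented stand-ins for Apple's undisclosed rules.

```python
# Toy model of SKAN-style reporting: postbacks are held for 24 hours
# plus a random further delay of up to 24 hours, so exact timestamps
# can't be used to match an ad tap to a subsequent install or purchase.
import random

def postback_delay_hours(rng=random):
    """Base 24-hour hold plus random jitter of up to another 24 hours."""
    return 24 + rng.uniform(0, 24)

def crowd_anonymous(report, min_crowd=25):
    """Release detail only if enough other phones produced the same
    (campaign, coarse conversion) combination. The threshold of 25 is
    a made-up stand-in for Apple's actual, undisclosed rules."""
    return report["crowd_size"] >= min_crowd

# An unusually big purchase puts a phone in a crowd too small to hide in,
# so the detail would be withheld.
common_report = {"campaign": 7, "conversion": "install", "crowd_size": 3000}
outlier_report = {"campaign": 7, "conversion": "large_purchase", "crowd_size": 4}

delay = postback_delay_hours()
```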

Apple’s preservation of crowd anonymity is a striking reversal of digital advertising’s overall trajectory, which has so far been to tailor advertising to very specific audiences, and ultimately individuals, in contrast to the inherently aggregate viewership of, say, traditional broadcast TV shows. Google is building a broadly analogous set of smartphone de-individualising mechanisms, the Android Privacy Sandbox, though it hasn’t yet committed itself to a launch date.

The second approach to measuring ads’ effectiveness tries more fully to preserve advertisers’ immediate knowledge of your actions. Since IDFAs are usually no longer available, I’m told that this can involve the use instead of Internet Protocol addresses. They’re nothing like a full substitute for IDFAs: when your phone is connected to the internet via a phone network, it may be sharing the network’s local IP address with hundreds or thousands of other phones, and that’s a form of crowd anonymity in itself. When, however, your phone’s connection is via the wifi router in your home, its IP address is your router’s address. The crowd is then much smaller: it’s the devices in your household using the same router. I asked one of my contacts whether whale hunting has continued since Apple’s changes. It has, he told me, even if it is now less precise. ‘They used to know that I was a match-three whale,’ he said, ‘but my wife wasn’t, and my two kids’ iPads weren’t. But with IP addresses, they still know my household has one match-three whale in it.’ Add in other information that may well be available, notably the model of phone and the specific version of the operating system it is running, and the crowd might sometimes shrink back to one.
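Household-level resolution by IP address can be sketched as a simple grouping. The addresses, device strings, spending figures and the whale threshold below are all invented; a real system would combine many more signals.

```python
# Sketch of coarse 'identity resolution' by IP address: events sharing
# a home router's address are grouped into one household, and device
# model plus OS version can then shrink the crowd further.
from collections import defaultdict

events = [
    {"ip": "81.2.69.142", "device": "iPhone 15, iOS 17.5", "spend": 40.0},
    {"ip": "81.2.69.142", "device": "iPhone 12, iOS 17.5", "spend": 0.0},
    {"ip": "81.2.69.142", "device": "iPad 9, iPadOS 17",   "spend": 0.0},
    {"ip": "203.0.113.7", "device": "Pixel 8, Android 14", "spend": 1.0},
]

households = defaultdict(list)
for e in events:
    households[e["ip"]].append(e)

WHALE_THRESHOLD = 20.0   # dollars a month: an assumed cut-off
whales_by_household = {
    ip: [d["device"] for d in devs if d["spend"] >= WHALE_THRESHOLD]
    for ip, devs in households.items()
    if any(d["spend"] >= WHALE_THRESHOLD for d in devs)
}
# The first household 'has one match-three whale in it'; the device
# string narrows the crowd from the household toward a single person.
```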

For these reasons, the use of IP addresses to help measure the effectiveness of advertising is contentious. Three different well-informed people have told me that it is widespread, but it’s hard for an outsider to determine how important it is relative to other inputs to machine learning systems, such as data from Apple’s SKAN or the behaviour of the minority of users who have agreed to tracking. There is, however, another issue: timing. If you use only Apple’s SKAN, you have to wait up to 48 hours for data to arrive, and possibly longer if you want information beyond, say, the simple fact that an install has occurred. But if you can yourself gather the data you need (IP addresses, for example, can be captured without using Apple’s systems), advertising can continue to be optimised in real time.

Is all this within Apple’s rules? The App Store tells app developers that they must not ‘derive data from a device for the purpose of uniquely identifying it’, and among the examples of such data is ‘the user’s network connection’. One of my informants, though, tells me that those who use IP addresses as an input to their machine learning systems interpret ‘identifying’ a device as ‘identifying it persistently and specifically’, which an IP address does not do. Elaborating or clarifying the rule might not resolve the latent conflict. As my informant says, as soon as you do this, ‘you create opportunities to find loopholes.’ And Apple can’t simply ‘zero out’ iPhones’ IP addresses, because they’re the means by which packets of information are guided through the internet to the correct destination.

Apple could obscure iPhones’ IP addresses by encrypting them and routing messages through a relay system of computer servers. That’s what’s done (for both good reasons and to hide from law enforcement) on the dark web. By turning on Apple’s ‘private relay’ system, subscribers to Apple’s premium iCloud+ service can already conceal one aspect of their online behaviour, web browsing, in this way. If Apple were to go further, and more thoroughly obscure the operations of all the world’s iPhones, it would probably spark strong government opposition, not least in China, an important iPhone market. It would also involve a great deal of additional processing and electronic traffic, and could palpably increase the internet’s already high energy consumption and carbon emissions.*

Tensions such as this haunt our attitudes to the digital economy. We want privacy, but we also want free information and entertainment, the economics of which often depend on targeted advertising. We are excited by the capacities of giant-scale, electricity-hungry artificial intelligence, while also knowing that we have to reduce carbon emissions. We value Big Tech’s sophisticated services and protected digital environments (the App Store, for instance, is good at blocking malware), but we also want to open them up to healthy competition.

Must the balancing of such priorities remain solely in the hands of the private sector? A glimpse of what might be possible is the current role of the UK’s Competition and Markets Authority in monitoring and evaluating Google’s intended phase-out of another of the central mechanisms of digital advertising – the tracking of users across websites using cookies – and its replacement with a Privacy Sandbox for Google’s Chrome, the world’s most widely used web browser. The competition law concerns that have swirled around Google, and fears that the change will increase Google’s market dominance, prompted a legally binding agreement between it and the CMA. This in effect gives the CMA the power to stop the Sandbox being rolled out if it has features that unduly advantage Google.

This experiment in public policy will have global consequences (Google’s changes will not be restricted to the UK), and it became more complicated in July when Google unexpectedly announced that instead of phasing out tracking cookies, it intends to give Chrome users an as yet unspecified ‘informed choice’ about them. This may also need the approval of the CMA and perhaps the Information Commissioner’s Office too. That’s how it should be. Preserving privacy is often in tension with the fostering of competition, but a productive balance between them is possible, and astute public policy can help find it.
