Privacy Metaphors

We are living through a privacy tipping point.  Technology is dramatically changing what is possible in terms of surveillance, monitoring, persistence, and analysis.  We are cracking open the lid of Pandora’s Box, and we still don’t really know what’s inside.  Maybe the benefits of the disappearance of privacy will outweigh the negative outcomes.  Is it possible that the quantified self will be worth trading for the all-seeing eye of the corporation or state? We don’t understand the full story yet.

One thing that IS clear is that this is an important time to stop and think.  Before we give away privacy in ways that may (already) be very difficult to undo, we ought to slow down and consider the implications.  That’s what brings me to write about this.  I am not an expert on privacy, but I believe the issue is important enough that it will require all of us to come up with an approach to privacy that realises the benefits of technology without undermining our rights and our autonomy.

We know that both corporations and governments are actively collecting data about us. We are not happy about the covert collection of data about us by governments but ironically we queue up to give away our privacy to corporations in exchange for services.  We know that proliferation of technology is making it harder to be anonymous.  In particular, the smarter our mobile phones get, the more data they leak to governments and corporations alike.

We’ve known for some time that it doesn’t take many pieces of data to uniquely identify someone.  More than twenty years ago, researcher Latanya Sweeney showed that with just three pieces of data (date of birth, gender, and postal code) she could uniquely identify 87% of the US population.   In 2006, researchers Arvind Narayanan and Vitaly Shmatikov shocked Netflix by de-anonymising a massive dataset of movie ratings that Netflix had released after stripping it of what they thought was all personally identifiable information, or PII as it is known in the data industry.

That is disturbing enough on its own but it has actually gotten worse since then.  Last year, researchers from MIT were able to uniquely identify individuals from cell phone records using just four data points indicating location and time.  In fact, location data turns out to be incredibly informative.  In a competition called the Nokia Mobile Data Challenge, researchers were able to estimate a user’s gender, marital status, occupation and age based on location information alone.  Researchers on location tracking point out that the accumulation of data is significant: what is anonymous in small amounts becomes PII in large amounts.
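The arithmetic behind these findings is easy to demonstrate.  The sketch below uses an invented toy population (not real data) to count how many records are made unique by the date-of-birth/gender/postal-code combination that Sweeney studied:

```python
from collections import Counter

# Toy population of (date_of_birth, gender, postal_code) quasi-identifiers.
# All values are invented for illustration.
population = [
    ("1970-03-12", "F", "02138"),
    ("1970-03-12", "M", "02138"),
    ("1981-07-04", "F", "10001"),
    ("1981-07-04", "F", "10001"),
    ("1964-11-30", "M", "90210"),
]

counts = Counter(population)

# A record is uniquely identifying when its combination occurs exactly once.
unique = [rec for rec in population if counts[rec] == 1]
fraction_unique = len(unique) / len(population)

print(f"{fraction_unique:.0%} of this toy population is uniquely identified")
```

The point is that no single field identifies anyone; it is the *combination* of otherwise innocuous fields that does the damage, and the sparser the combination, the more often it is unique.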

Some Massively Open Online Courses (MOOCs) are now collecting keystroke information on students which they use to uniquely identify students.  The goal of eliminating fraudulent behaviour in MOOCs is laudable but the collection of this data raises privacy issues.  How would you know for certain when this data is or is not being collected from you?  What if this data found its way into the hands of other less scrupulous organisations who might conceivably use it to find you anywhere on the Internet?

So we live in an era where it is becoming increasingly challenging to protect one’s privacy.  In fact, I am told that de-anonymisation researchers have recently reached the point where some are choosing not to publish some of their research results because they might be used to further undermine privacy.

My Data

One popular reaction to the erosion of personal privacy is to attempt to reclaim privacy through personal data control, where we are able to establish and exert our own individual preferences in order to set our own boundaries for privacy.  The notion of privacy as an individual transaction, where we are each allowed to choose whether or not to share PII, sounds like a great improvement on what we have now, where we have very little individual control.  Laura James of the Open Knowledge Foundation makes the case in a blog post that the right to choose should be an essential element of “my data”.  She says, “if it’s my data, just about me, I should be able to choose to access it, reuse it, share it and open it if I wish.”

Until recently I would have said that this was hard to argue with but what I have learned recently has made me realise that privacy cannot be so easily reduced to individual transactions.  In his excellent lecture series, Snowden and The Future, Eben Moglen makes the case (in Part 3) against privacy as a transactional issue.  He points out that “If your family contains somebody who receives mail at Gmail, then Google gets a copy of all correspondence in your family.”  Your personal decision has privacy implications for everyone you know.

Perhaps even more worryingly, researcher Scott Peppet argues that decisions to reveal personal information publicly have implications for those who choose not to.  He suggests that people with “valuable credentials, clean medical records, and impressive credit scores will want to disclose those traits to receive preferential economic treatment.”  Pressure is then put on those with only marginally less valuable credentials to share in order to benefit.  Peppet argues that others could find they also need to disclose PII in order to avoid negative inferences that may be drawn from staying silent.

New Metaphors

So apparently we need a new way of looking at privacy issues.  Researchers Paola Tubaro and Antonio A. Casilli have explored a multi-dimensional agency-based model.  In their research, they found that a tendency to share more online was accompanied by a counter-tendency among people to protect themselves online.  This plays out in complex ways in which we all influence each other through our privacy (or lack of privacy) practices.

Eben Moglen has suggested that, from a legal perspective, privacy is much more like an environmental issue than a transactional issue.  He points out that “environmental law is not law about consent. It’s law about the adoption of rules of liability reflecting socially determined outcomes: levels of safety, security, and welfare.”  Perhaps this is a better way of looking at privacy.  I wonder what the privacy equivalent of a fine for littering is?

As I was reflecting on this, I wondered whether we might look at privacy from a health perspective and consider certain privacy practices as “vaccines” against the more egregious invasions of personal privacy.  The notion that privacy is a social thing seems almost oxymoronic at first glance but the closer you look, the more evident it becomes that privacy is something we engage in collectively but benefit from individually.

I am still digesting these ideas and reading more.  I hope to see something from YOU too.  Privacy is something too important to be left up to technological determinism or to twenty-something billionaires.  We all need to read, think, and ENGAGE.

I am grateful to @barefoot_techie for links to many thought-provoking articles and for the opportunity recently to listen to privacy researcher Kate Crawford.  Privacy image courtesy of g4II4is

Africa’s LTE Future

If you follow communication infrastructure in Africa, you would be forgiven for having begun to think of LTE as the promised land.  There is no doubt that mobile networks have transformed access on the continent.  Now, we are apparently just waiting for the roll-out of LTE to complete the revolution and provide high-speed broadband to all.  This article looks at how LTE is evolving on the continent from the perspective of spectrum and device manufacturing.

LTE Spectrum

In the early days of mobile, spectrum was pretty simple.  Your GSM mobile phone usually supported two bands: 900MHz and 1800MHz for Region 1, which covers Europe and Africa, or 850MHz and 1900MHz for Region 2, which covers North and South America.  There’s also Region 3, which covers Asia, but this is complicated enough for now.  Next came the tri-band and quad-band phones that embraced global travellers, allowing them to operate on mobile networks in both Region 1 and Region 2.  Anyone remember the Nokia 6310i?

Then came 3G mobile services which introduced new spectrum bands, 2100MHz in Africa and a number of different spectrum bands in North America.  At that point Nokia was still the dominant manufacturer and had a huge range of phones aimed at different markets.  Mobile phones tended to be very tied to national operators.

With the introduction of the Apple iPhone and what we now know as smartphones, things got more complicated.  Because popular smartphones are global brands, manufacturers like Apple wanted to sell just one phone but were forced to manufacture two or more different models in order to be compatible with the spectrum regimes in different regions.  The original Google smartphone, the Nexus One, came in two different versions.  The version I bought works as a phone in both Africa and North America, but I only get 3G in Africa because it isn’t designed for North American 3G frequencies.

And now LTE.  The standards body for LTE, the 3GPP, have defined over 40 unique spectrum bands for LTE.  Currently the most advanced smartphones in the world, like the iPhone 5s or the Samsung Galaxy S4, support only a subset of those bands.  Apple have five different versions of the iPhone 5s for sale globally that support different combinations of spectrum bands and technologies.  The iPhone arguably has the widest support for LTE, with about ten different bands supported compared to about five on the Galaxy S4.  In both cases we are talking about US$800 phones.  An affordable, flexible LTE mobile phone for Africa is still a long way off.
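The practical consequence of this band fragmentation can be shown with a simple lookup.  The band lists and device names below are invented for illustration, roughly mirroring the ten-versus-five band split described above:

```python
# Hypothetical band-support lists (illustrative only, not official specs).
PHONE_BANDS = {
    "phone_a": {1, 2, 3, 5, 8, 13, 17, 19, 20, 25},   # ~ten LTE bands
    "phone_b": {1, 3, 5, 7, 20},                       # ~five LTE bands
}

# A few real African LTE deployments and their 3GPP band numbers.
NETWORK_BANDS = {
    "Movicel (Angola)": 3,       # 1800MHz
    "Smile (Nigeria)": 20,       # 800MHz
    "Spectranet (Nigeria)": 40,  # 2300MHz
}

def compatible_networks(phone: str) -> list[str]:
    """Return the networks whose LTE band the given phone supports."""
    bands = PHONE_BANDS[phone]
    return [net for net, band in NETWORK_BANDS.items() if band in bands]

print(compatible_networks("phone_b"))
```

Note that even the hypothetical ten-band flagship misses Band 40, so "supports LTE" on the box tells a buyer very little about whether the phone will get LTE on their network.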

LTE in Africa in 2014

Currently there are nine countries in sub-Saharan Africa where LTE networks have been launched, eighteen operator networks in total.  Here’s how it breaks down.

Country       Company           Frequency          Launch Date
Angola        Unitel            2100MHz (Band 1)   Dec 2012
Angola        Movicel           1800MHz (Band 3)   Apr 2012
Mauritius     Orange Mauritius  1800MHz (Band 3)   Jun 2012
Mauritius     Emtel             1800MHz (Band 3)   May 2012
Namibia       MTC               1800MHz (Band 3)   May 2012
Namibia       TN Mobile         1800MHz (Band 3)   Nov 2013
Nigeria       Smile Telecom     800MHz (Band 20)   Mar 2013
Nigeria       Spectranet        2300MHz (Band 40)  Aug 2013
South Africa  MTN               1800MHz (Band 3)   Dec 2012
South Africa  Vodacom           1800MHz (Band 3)   Oct 2012
South Africa  Neotel            1800MHz (Band 3)   Aug 2013
South Africa  Telkom / 8ta      2300MHz (Band 40)  Apr 2013
Tanzania      Smile Telecom     800MHz (Band 20)   Aug 2012
Uganda        Smile Telecom     800MHz (Band 20)   Jun 2013
Uganda        MTN Uganda        2600MHz (Band 38)  Apr 2013
Uganda        Orange Uganda     800MHz (Band 20)   Jul 2013
Zambia        MTN               1800MHz (Band 3)?  Jan 2014
Zimbabwe      Econet            1800MHz (Band 3)   Aug 2013

Source:  4G Americas Global Deployment Status  - Updated January 10, 2014
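A quick tally of the bands in the table above shows how heavily these launches lean on refarmed 1800MHz (Band 3) spectrum:

```python
from collections import Counter

# LTE bands of the eighteen sub-Saharan launches listed in the table above.
deployments = [1, 3, 3, 3, 3, 3, 20, 40, 3, 3, 3, 40, 20, 20, 38, 20, 3, 3]

band_counts = Counter(deployments)
for band, n in band_counts.most_common():
    print(f"Band {band}: {n} network(s)")
```

Ten of the eighteen networks sit in Band 3, which helps explain why operators have been able to launch with existing devices and existing spectrum holdings.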

The first thing to know about the above is that none of these LTE networks are carrying voice traffic.  Voice over LTE or VoLTE, the emerging LTE standard for voice communication, has not been deployed anywhere in Africa.  This means that even networks that are offering LTE smartphones are still using GSM or 3G circuit-switched networks to carry voice traffic.  The move to VoLTE will be a big technical leap when it happens as LTE is the first generation of mobile connectivity to be entirely based on Internet protocols. Managing voice and data on the same network may present interesting new challenges for voice quality.

Movicel in Angola was one of the first networks to launch in Africa.  With Movicel, an LTE dongle will cost you about US$370 and they claim download speeds of up to 100Mbps.  The iPhone 5s is available too and that will set you back US$1500.  This is a service clearly aimed at elites, for the time being.

Some LTE networks are aimed exclusively at data users.  Smile Telecom, who have networks in Tanzania, Uganda, and Nigeria, offer a data only service.  The reason for this is largely historical as Smile attempted to launch WiMax networks in Uganda and Tanzania and learned a painful lesson about the importance of having a manufacturing ecosystem around the network devices.  The WiMax mobile handset never took off and as a result neither did Smile’s networks.  They must have deep pockets though as they have been able to leverage their existing investments in 800MHz spectrum to launch brand new LTE networks in each country.  They are staying away from handsets this time though and offering data services through dongles.  For more depth, Telecom.com have an excellent profile of Smile and their LTE strategy.

New Spectrum

For the time being, most African operators are recycling their existing spectrum for new LTE services.  It speaks to how much spectrum most of the big operators hold that they can afford to do this and still maintain 2G and 3G networks.  There is a big push for new spectrum to be made available for LTE though, especially in the 700MHz and 800MHz bands.  This will bring new opportunities and new challenges.  A brand new iPhone 5s that works on any of the brand new LTE networks above won’t work on 700MHz spectrum.  Manufacturers will be increasingly challenged to develop phones that suit different regions as countries prioritise different ranges of spectrum for release.

Manufacturers are likely to have time to work on this however as releasing what is now hyper-valued spectrum in a manner that encourages a competitive environment is proving to be a challenge.  The ongoing 700MHz auction in Canada is a good example of this.  As governments strive to encourage new competition, existing operators are likely to push for a hands-off approach which favours the incumbents.  This tension might well lead to further delays in the release of spectrum.

How Africa’s LTE Future Might Be Different

Unless a multi-band, affordable LTE smartphone appears on the horizon, LTE phones are going to be irrelevant to the vast majority of people on the continent.  However, the potential for LTE data is huge.  Data dongles, which are much more affordable (about US$70), can be used to backhaul data to a community and serve a variety of consumers.  This is what makes WiFi such an important complementary technology: WiFi-enabled phones and tablets tethered to an LTE-powered hotspot are a much higher-value proposition than a single smartphone.  A challenge remains in the economics of bringing LTE to sparsely populated rural areas, but what we are hopefully beginning to see now is the emergence of a much more interesting and potentially resilient ecosystem of communication access in which a variety of technologies can serve the last mile: LTE, WiFi, white spaces, and inevitably some things we haven’t imagined yet.

GSM and Dynamic Spectrum

MTN Coverage Map from South Africa

Adoption of Television White Spaces (TVWS) spectrum saw great progress in 2013.  The Google-sponsored Cape Town TVWS trial was completed and the results were an unqualified success.  TVWS trials got underway in Malawi.  And Microsoft pushed ahead with pilots in Kenya and soon Tanzania and South Africa.

While TVWS has great potential in Africa, thanks to the relative emptiness of television broadcast spectrum and the need for affordable rural broadband solutions, it is worth looking more broadly at the potential of dynamic spectrum allocation beyond the VHF and UHF bands.   In particular, I want to raise the possibility of applying the concept of dynamic spectrum allocation to the GSM bands.  To many this will sound like heresy, as most of the available spectrum in the GSM bands has already been assigned to mobile network operators (MNOs) for their exclusive use and that spectrum IS in use. At least we are led to believe that is true, but it is hard to know comprehensively as most communication regulators don’t publish an up-to-date list of spectrum assignments for their country and very little is done in terms of actual spectrum occupancy surveys and analysis.

So what can we say from practical experience?  Experience tells us that in most parts of sub-Saharan Africa, you don’t have to go far out of an urban area or off a main highway to see coverage become much less reliable.  3G disappears almost immediately and even basic phone coverage can disappear pretty quickly, especially in hilly areas.  The reason is that deployment in sparsely populated rural areas often doesn’t make economic sense for MNOs.  They can’t make enough money to justify building and operating a base station in many rural areas.

This is a bit of a raw deal for rural dwellers who are increasingly marginalised as the value of having communication access continues to rise.  The message from MNOs is that either the government must subsidise them to deliver rural services or they must wait until technology evolves and/or becomes cheap enough to make rural deployment practical.  The truth is that there is very little pressure on MNOs to deliver in rural areas.  The rural poor have no voice and don’t command the attention of politicians.

But just because the economics for rural access doesn’t work for the MNOs doesn’t mean there isn’t a model that can work.  Low cost GSM basestation manufacturers like Range Networks in the US and Fairwaves in Russia are producing affordable GSM basestation equipment that can be deployed for less than five thousand US dollars.  However, the current spectrum licensing environment forbids them access to GSM spectrum that has already been assigned to MNOs, even when that spectrum is not in use, as may be the case in many rural areas.

But what if you could use that spectrum?  What if, just like Television White Spaces, it were possible to have dynamic access to unused GSM spectrum?  This is not nearly as far-fetched as it may sound.  In Mexico, Rhizomatica, a grass-roots non-profit organisation, have done just that.  They sought permission from the regulator to use GSM spectrum in un-served villages near the city of Oaxaca.  According to founder Peter Bloom, they were able to take advantage of a provision within the Mexican constitution which says that an indigenous community has the right to own and operate its own media infrastructure. In addition, Mexican telecom law states that whenever a frequency is not being used in a specific area by the concession holder, the ministry has the right to assign that frequency for social coverage purposes, or find other available and relevant spectrum.

Armed with that information, Rhizomatica were able to approach the Mexican communication regulator with a letter signed by more than thirty indigenous communities seeking access to spectrum.  They were then invited to submit a formal technical proposal, which led to an experimental license being granted to use an un-assigned block of frequencies in the 850MHz band.  With that they have been able to provide GSM communication services to over a thousand people.  Their success has been profiled by CNN, the BBC, and others. They are not the only example either.  Researchers at UC Berkeley have partnered with a local non-profit in Papua, Indonesia to put up a similar community GSM network.

These are small but bright examples of how rural communities might solve their own connectivity problems, but not every community has the wherewithal to petition a regulator for spectrum.  But what if things were different?  What if the same geo-location database technology that has been proposed to manage Television White Spaces spectrum could be used to make unused GSM spectrum available to small operators on a secondary use basis? Then small rural operators could be set up without a long administrative process. Why not?
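At its core, a geo-location database of this kind could be as simple as a lookup from coordinates to the channels an incumbent actually uses there.  The sketch below is purely illustrative: the areas, occupancy data, and simplified channel plan are all invented.

```python
# Sketch of a TVWS-style geo-location database applied to GSM spectrum.
# Each entry: (lat_min, lat_max, lon_min, lon_max) -> set of channel
# numbers (ARFCNs) the licensed operator actually uses in that area.
OCCUPIED = {
    (-34.0, -33.0, 18.0, 19.0): {1, 2, 3},  # urban area: operator active
    (-32.0, -31.0, 20.0, 21.0): set(),       # rural area: band lies fallow
}

ALL_ARFCNS = set(range(1, 25))  # simplified channel plan

def available_channels(lat: float, lon: float) -> set[int]:
    """Channels a small rural operator could use on a secondary basis."""
    for (lat0, lat1, lon0, lon1), used in OCCUPIED.items():
        if lat0 <= lat <= lat1 and lon0 <= lon <= lon1:
            return ALL_ARFCNS - used
    return set()  # unknown area: grant nothing

print(sorted(available_channels(-31.5, 20.5))[:5])
```

The conservative default (grant nothing where occupancy is unknown) is exactly the kind of rule that could give a regulator confidence that secondary users will not interfere with an incumbent.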

But wait, it gets better.  The very same researchers at UC Berkeley have come up with a brilliant innovation that allows them to carry out spectrum sensing using ordinary GSM handsets. Their innovation, which they have already implemented as a proof-of-concept on the low-cost GSM technology that they currently use, enables each mobile phone connected to their network to sense occupancy in the GSM spectrum band.  The base station can then dynamically move away from any occupied frequency in the area. This concept works on even the most basic of mobile phones.  This approach, either alone or combined with a geo-location authentication database, could offer an effective guarantee of non-interference to the regulator.  A full explanation of this approach is available in a research paper published a few weeks ago.
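The handset-assisted sensing idea can be sketched in a few lines.  Everything here (the report format, the threshold, the channel numbers) is an invented simplification of the general approach, not the researchers’ actual implementation:

```python
# Sketch: each phone reports which channels it hears energy on, and the
# base station moves off any channel that enough handsets report as busy.
from collections import Counter

def pick_channel(current, candidates, reports, threshold=2):
    """Return a channel that fewer than `threshold` handsets report busy."""
    busy = Counter(ch for report in reports for ch in report)
    if busy[current] < threshold:
        return current  # stay put: current channel still looks clear
    for ch in candidates:
        if busy[ch] < threshold:
            return ch   # first sufficiently quiet alternative
    raise RuntimeError("no clear channel available")

reports = [{5, 7}, {5}, {9}]  # three handsets' sensing reports
print(pick_channel(5, [7, 9, 11], reports))
```

The attraction of this design is that the sensing hardware is the installed base of ordinary phones itself, so the guarantee of non-interference scales with the number of users rather than requiring dedicated monitoring equipment.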

In summary:

  1. The social and economic cost of not having access to communication is rising.
  2. The current paradigm of spectrum management is not enabling access for economically poor, sparsely populated, rural areas.
  3. The solutions to this problem are available to us but it is going to take some communication regulators with courage and vision to allow these new approaches to take hold.


Spectrum Auctions for Beginners

In the previous article, I looked at the merits of licensed vs. unlicensed spectrum and suggested that there might be scope for some new approaches. Here we’re going to deal with licensed spectrum and the process of auctioning spectrum which has become the dominant means of assigning popular licensed spectrum frequencies.

Spectrum auctions are now generally accepted as a “best practice” for assigning spectrum where demand exceeds availability. When spectrum was plentiful, administrative assignment seemed to work well enough, but as demand for spectrum increased with the growth of wireless technologies, new methods were required. In the United States, the regulator experimented with spectrum lotteries where anyone could join a lottery for spectrum access. So-called “beauty contests”, where applicants for spectrum are qualitatively evaluated according to a set of criteria, have also been used. It is easy to see the flaw in the randomness of a spectrum lottery, but what’s wrong with a beauty contest? On the surface it seems eminently sensible to declare a set of policy goals and eligibility criteria and then evaluate applicants to find the most suitable. The reality is that the qualitative nature of these decisions tends to leave them open to challenge by disgruntled losers, especially if those losers are particularly well-resourced. Worse, the decision-makers in a spectrum beauty contest become targets for influence.

And this was exactly what economist Ronald Coase argued in 1959: that political pressures would inevitably result in misallocation of spectrum, and that administrative entities lack the decentralised information necessary to allocate spectrum effectively. Building on his own theory of economic efficiency, he argued that if “property-like” rights were associated with spectrum licenses and transaction costs were low, the market could most efficiently organise the assignment of spectrum.

Coase argued that bid prices in auctions are a useful proxy for how organisations value spectrum with the assumption that those that value spectrum most would go on to create the highest social and economic value with that spectrum. Ignored for years, he was vindicated in 1994 when the U.S. regulator implemented the first spectrum auction. Since then spectrum auctions have gone on to become the dominant model for assigning high-demand spectrum.

Auction Goals

If the goal of a spectrum auction were simply maximising revenue, then auctions would not be nearly as complicated as they currently are. Regulators are often trying to achieve multiple goals through a spectrum auction, including:

  • ensuring efficient use of the spectrum band;
  • promoting a competitive telecommunications market and avoiding unbalanced concentration of ownership of spectrum;
  • avoiding manipulation of the auction; and,
  • generating public value in the form of revenue from the auction.

Achieving the right balance of all of the above turns out to be quite challenging. An auction is a game in which players strive to get as much spectrum as possible for as little money as possible. Auction participants often use the latest in game theory expertise and software modelling to optimise their outcome from a spectrum auction.

Because of the large amounts of money now involved (the 2008 700MHz auctions in the US generated nearly 20 billion dollars in revenue for the government) the rules of participation in a spectrum auction need to be crystal clear in order to both maximise trust in the process thereby encouraging participation and to avoid litigation as a result of an accusation of unfair play by a participant.

This means that auctions are now quite expensive to organise and run, sometimes costing in excess of a half million dollars. In fact, most auctions are now outsourced to spectrum auction consultants who assist in the design of the auction to achieve the government’s strategic goals as well as the execution of the auction.

It is safe to say that any regulator is well-advised to hire a spectrum auction consultant if only because they can be sure that the auction participants will be hiring auction experts as well.

Auction Types

There are two dominant types of spectrum auctions in use today.

Simultaneous Multi-Round Auction (SMRA)

This is the oldest and best known form of spectrum auction. In it, multiple lots of spectrum are auctioned simultaneously in a series of rounds. Here an individual lot refers to a specific frequency band in a specific geographic area. An auction would typically have multiple frequency bands available in multiple regions. An SMRA auction depends on the regulator allocating the specific frequency lots prior to the auction so that bidders know exactly which chunk of spectrum in which geographic region they are bidding on.

Each round of bidding sets the standing high bid for each lot and the rounds continue until there is no excess demand. The great strength of this auction type is that it is comparatively simple and well-understood by all.  This facilitates speed in organising the auction as well as maximising the chance of participation.
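The round mechanics described above can be sketched in a few lines of code. The bidders, private valuations, and fixed bid increment below are invented, and real SMRA rules (eligibility, activity requirements, withdrawals) are far richer than this:

```python
# Minimal sketch of simultaneous multi-round bidding on two lots.

def smra(lots, valuations, increment=10):
    """Run rounds until no bidder will raise any standing high bid."""
    standing = {lot: (None, 0) for lot in lots}  # lot -> (bidder, price)
    while True:
        raised = False
        for bidder, values in valuations.items():
            for lot in lots:
                holder, price = standing[lot]
                # Bid only if not already holding the lot and the raise
                # stays within this bidder's private valuation of it.
                if holder != bidder and price + increment <= values[lot]:
                    standing[lot] = (bidder, price + increment)
                    raised = True
        if not raised:
            return standing

valuations = {
    "op_a": {"900MHz": 100, "1800MHz": 40},
    "op_b": {"900MHz": 60, "1800MHz": 80},
}
print(smra(["900MHz", "1800MHz"], valuations))
```

Even in this toy version the key property is visible: bidding on each lot stops as soon as there is no excess demand, with each lot going to the bidder who values it most.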

Unfortunately the SMRA auction also has numerous drawbacks. The predetermined nature of the spectrum lots can lead to inefficiencies where some bidders may end up with non-contiguous blocks of spectrum. This may happen inadvertently or it may be a result of predatory bidding by competitors.

There is also the danger of prices going unnecessarily high with the winning bid paying significantly more than would have been required to win, the so-called “winner’s curse”.

Finally, there is the danger of tacit collusive behaviour on the part of bidders, where an informal understanding among bidding entities may result in lower bid prices as each company targets a spectrum band that is understood to be theirs. Auction designers have introduced rules into SMRA auctions that make this more difficult but it remains a risk.

Combinatorial Clock Auction (CCA)

The Combinatorial Clock Auction or CCA was designed to address many of the shortcomings of the SMRA auction. In a CCA auction, participants bid on generic lots of spectrum rather than individual lots. This means that the band plan for a given frequency at auction is not pre-determined as it is in an SMRA auction but is calculated after the auction in a manner designed to optimise the outcome for all successful bidders. This increases the likelihood of optimal use of the spectrum band as well as making it more difficult for participants to engage in collusive behaviour.

The auction then proceeds not by bids but by a “clock” process in which, in each bidding round, a “clock” representing the value of a generic lot of spectrum is incremented by a set amount. Bidders simply indicate whether they would be prepared to pay the price on the clock in that round. This is a subtle but important difference from the straightforward bid in the SMRA auction in that the clocks facilitate price discovery among the participants. That is to say, the participants collectively inform each other through this process of how they value spectrum. Even with the best experts in the world, spectrum is a notoriously difficult thing to value, and the price discovery function of a clock-style auction performs an essential role in normalising the value among the bidders.

Finally, instead of the winning bidder paying the full price of the winning bid, they pay the second highest price, so it is still the case that whoever values the spectrum most wins but they pay no more than the minimum required to win. In a combinatorial auction with multiple lots and winners, this can get a bit complicated, but the “second price rule” has evolved to cope with these complexities.
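The clock phase itself can be sketched as follows. The bidders and their walk-away prices are invented, each bidder demands a single generic lot, and the CCA’s actual second-price (opportunity-cost) payment calculation is omitted; the sketch only shows the price ticking upward until demand no longer exceeds supply:

```python
# Minimal sketch of the clock phase: one generic band, `supply` lots,
# price ticks up until aggregate demand no longer exceeds supply.

def clock_phase(supply, max_prices, increment=10):
    """max_prices: bidder -> highest per-lot price they will accept."""
    price = 0
    while True:
        demand = [b for b, top in max_prices.items() if top >= price]
        if len(demand) <= supply:
            return price, demand
        price += increment

price, winners = clock_phase(
    supply=2,
    max_prices={"op_a": 50, "op_b": 80, "op_c": 120},
)
print(price, winners)
```

Notice how price discovery works here: the moment the lowest-valuing bidder drops out, every remaining participant learns something about how the others value the band.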

So, why doesn’t everyone just carry out CCA auctions now? Well, CCA has its weaknesses too. First, it is a much more complex auction to design and execute. This makes it a much more expensive undertaking and it also increases uncertainty on the part of participants who will not know exactly what spectrum they will get until the auction is over. That may be a disincentive to participation. CCA auctions work best with larger spectrum auctions often including multiple spectrum bands.

Spectrum Reserves in Auctions

A final contentious area of spectrum auction design is that of the reserve price, or minimum opening bid, for spectrum. Set the reserve too high and potential bidders may choose not to participate. Set it too low and you run the risk of not realising the full value of the spectrum. There appears to be an unfortunate trend in spectrum auctions to place a disproportionate weight on the revenue generated from the auction as opposed to the revenue that will be generated by the cheaper and more pervasive access to telecommunications that occurs when spectrum is assigned efficiently and effectively. The economic impact of communication infrastructure is well-documented but seems to pale beside the priority of maximising revenue from spectrum auctions. This seems like a mistake worth avoiding.

Activity Rules, Caps, Set-Asides

Spectrum auctions can be tweaked in different ways in order to prevent bad behaviour, encourage competition and/or limit the power of dominant players. For example, in order to discourage last-minute bidding, activity rules can be established to ensure that bidders must participate throughout the auction. Caps on spectrum ownership can be set to limit how much spectrum one player can own. Spectrum set-asides can be created to ensure spectrum is available to new players in the market. All of these tweaks are established to encourage specific outcomes and sometimes this works. It is also true, however, that the more complex a spectrum auction becomes, the greater the chance of an unexpected, undesirable outcome.

In Summary

As I read the news of spectrum auctions from around the world, I am reminded of what Winston Churchill said about democracy: that it was “the worst form of government, except for all those other forms that have been tried”. Perhaps that might be a good description of spectrum auctions: the worst way to assign spectrum, except for all the other ways that have been tried. Given the value that is now placed on spectrum, the risk involved in spectrum auctions for both the government and for bidders grows ever higher, and as a result any failure to effectively and efficiently assign the spectrum grows ever more costly. I don’t expect spectrum auctions will go away any time soon, but it does seem that we need a risk mitigation strategy to complement auctions of licensed spectrum, one that will hopefully reduce the impact of failure when it happens. Dynamic secondary use of spectrum such as “White Spaces” spectrum is likely one answer, but as technology enables more nimbleness both at the wireless interface and at the management interface, new ideas and approaches are likely to continue to emerge.

Spectrum — To License or Not To License

In part one of this series on Spectrum 2.0, I highlighted just how complex radio spectrum management is and why experts can’t seem to agree on whether we are running out of spectrum or entering an age of abundance. I finished by saying that the challenge of spectrum management is that we still haven’t worked out a very satisfying means of deciding who gets what spectrum and for how long. So let’s look at the two success stories in wireless access: mobile networks and WiFi. These are now the two dominant end-user wireless access technologies in the world, and they represent two very different models for accessing spectrum: exclusively licensed versus unlicensed access.

Exclusive-Use Spectrum Licenses


Alas poor Yagi…

Exclusive licensing of spectrum is the model that has underpinned the success story of mobile telephony around the world.  Under this model operators are given access to large chunks of spectrum often on a national basis.  Licenses are long-term, typically 15 years, and are renewable.  The long period of the license and the exclusivity of use are the hallmarks of this model.  The advantage of this approach is that it guarantees that the operator’s communication technology will not suffer interference at the hands of other operators/technologies and it also provides a long time window for the operator to deploy their network increasing the likelihood of their becoming profitable. At the time when exclusive-use licenses were first granted for mobile networks in the early 90s in Africa, spectrum was deemed plentiful and exclusive spectrum licenses were typically granted at zero cost to the operator.  This made sense as no one was sure at what rate mobile networks might grow and certainly no one predicted the massive success that they became.  As mobile networks grew and investors saw how potentially profitable mobile networks in Africa could be, demand for spectrum increased and regulators began to struggle with the challenge of how to determine who should be awarded these now extremely valuable spectrum licenses. Even though there was no overall shortage of spectrum, the frequencies defined for GSM mobile use by the ITU and for which manufacturers produce equipment were limited to a few spectrum bands, which directly limited the number of spectrum licenses that could practically be awarded.  Since March of 2005 when The Economist trumpeted the impact that mobile networks were having in Africa, mobile has gone on to define access to communication in Africa in the popular press.

Regulated Unlicensed Spectrum Use

A very different but equally amazing success story is that of unlicensed spectrum use in the 2.4GHz and 5GHz bands, particularly WiFi communication. From its humble beginnings connecting laptops in cafes, hotels, and airports, WiFi access is now found in almost any commercial or public building, not to mention its default use in the home as the end point of a broadband connection. It is estimated that 2.14 billion WiFi chipsets will ship in 2013, a figure expected to grow to 3.7 billion in 2017. We now see WiFi in smartphones, tablets, cameras, printers, even refrigerators and weighing scales. It has become the default “last inch” technology. Unexpectedly, it has also grown to play a critical role in mobile networks as a means of offloading the burgeoning demand for data on mobile devices. In countries like the U.K., WiFi accounts for as much as 75% of all smartphone data traffic. Ironically, this has not happened as part of a strategic roll-out of WiFi infrastructure by operators; WiFi infrastructure has largely grown organically through end-user purchasing of devices. WiFi was dismissed as too unreliable to be considered serious communication infrastructure, but it is hard to argue with the evidence. 75% of smartphone data travelling via WiFi is a statistic that demands attention.

ITU – Why You No Like WiFi?

Last year, when the ITU and UNESCO’s Broadband Commission launched their Broadband Report 2012, which purported to chart the future of access, particularly in the developing world, I gave them a hard time for completely ignoring the success and role of unlicensed spectrum. In this year’s report they do a bit better: they acknowledge the role of WiFi in mobile data offload, highlight some case studies of rural WiFi access, and recognise the potential of the latest frontier of unlicensed access, Television White Spaces spectrum.

However, lest anyone in the world of unlicensed spectrum get any ambitious ideas, Dr. Anne Bouverot, Director General of the GSMA, had this to say in the report:

The licensed use of spectrum, on an exclusive basis, is a time-tested approach for ensuring that spectrum users — including mobile operators — can deliver a high quality of service to consumers without interference. As mobile technologies have proliferated, demand for access to radio spectrum has intensified, generating considerable debate and advocacy for new approaches to spectrum management, including proposals for the use of TV ‘white spaces’ and other spectrum-sharing arrangements. While these innovations may find a viable niche in future, pursuit of these options today risks deflecting attention from the release of sufficient, licensed spectrum for mobile broadband [emphasis added].

What Dr Bouverot has to say is problematic on a couple of levels. One, the fact that exclusive licensing has worked in the past is no indicator that it will be a successful strategy on its own in the future. Exclusive-use spectrum licensing can, at best, offer a linear increase in spectrum availability, but we know that estimates of demand growth are non-linear. Two, it is disingenuous to depict unlicensed spectrum, such as “white spaces” technology, as something that might actually distract regulators from dealing with demand for licensed spectrum. There is absolutely no reason why both strategies cannot be pursued in parallel. One need only read the Broadband Report to know that the mobile industry is in absolutely no danger of having attention deflected from its agenda. Dr. Bouverot goes on to say:

The mobile industry is uniquely positioned to provide widespread broadband service to those who do not yet have it. Citizens around the world are just beginning to reap the true rewards of mobile. Proposals for experimental technologies and attempts to develop new business models risk obscuring the fact that licensed mobile services are the most viable, scalable and best-established model for extending broadband to citizens. Exclusively licensed spectrum for mobile is delivering on the goal of access for everyone, where other technologies fall short, and is providing direct employment and increasing productivity across many sectors. By following best practices in spectrum management, based on proven outcomes, governments around the world will secure a bright future for their citizens through mobile broadband.[emphasis added]

Once again there is a basic assumption that what has worked previously will work in a future where the demand landscape has changed dramatically. More to the point, however, is Dr. Bouverot’s claim that mobile operators are best placed to provide broadband to those who don’t have it. If you look at a typical MNO’s coverage map in sub-Saharan Africa, you will see that urban centres are generally well served with 3G and, in some cases, 4G services. Major roads will typically have GPRS/EDGE services. The people who aren’t being served are those in sparsely populated rural areas, and the reason they aren’t being served is that MNOs don’t have an economic model for service provision there. The CAPEX/OPEX models don’t stretch into these areas. However, there are examples of more nimble local approaches that could offer a more sustainable model for rural access. These new approaches do not detract from a licensed spectrum approach; they can be explored together.

So What’s The Answer?

Current debates around licensed versus unlicensed spectrum approaches remind me of where the debate on digital intellectual property rights was 12 years ago. You had a choice between making something open, under a completely open license, or opting for the more closed regime of traditional copyright. Advocates argued zealously on either side, and there didn’t appear to be much middle ground. We see something similar today with licensed and unlicensed spectrum approaches.

What changed with copyright was the arrival of the Creative Commons, a movement which sought to define a range of options when it comes to copyright. They were able to break down the elements of copyright, e.g. the right to modify, the right to be attributed, etc., into understandable chunks and allow people to choose from a palette of rights in deciding how to protect their work. This offered a great deal more flexibility for creators and opened up new avenues for both sharing and business creation.

What we need now is a kind of Creative Commons for spectrum management: a palette of options for how spectrum might be managed, ranging from exclusive, single-owner rights to spectrum through to unlicensed use. Numerous ideas are being explored, from geo-location database-driven approaches with ‘white spaces’ technology, to lite-licensing, to licensed shared access of the kind being trialled in the European Union.

In fact, we might go further and re-think the paradigm. Preston Marshall, one of the authors of the PCAST report on spectrum in the U.S. and now with Google, suggests that we abandon the notion of a license as a “right to exclusivity” and move to a license as a “right to protection from interference”. Protection from interference is, after all, the purpose of granting exclusive licenses. But if protection from interference could be achieved via technological means such as spectrum sensing, geo-location databases, or a combination of these approaches, then we might open up the market for rural access to real competition.
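The geo-location database idea can be made concrete with a small sketch. Everything here is invented for illustration (the channel numbers, incumbent locations, and protection radii are hypothetical, not taken from any real database such as those specified by the FCC or Ofcom): before transmitting, a device reports its location and asks which channels are free of protected incumbents there.

```python
# Hypothetical sketch of a "white spaces" geo-location database lookup.
# A device may only use channels whose protected incumbents are all
# farther away than their protection radius.

from math import radians, sin, cos, asin, sqrt

ALL_CHANNELS = set(range(21, 31))  # candidate TV channels (illustrative)

# Protected incumbents: (channel, latitude, longitude, protection radius km)
INCUMBENTS = [
    (23, -33.92, 18.42, 50.0),  # e.g. a TV transmitter near a city
    (27, -33.90, 18.60, 30.0),
]

def _distance_km(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance; good enough for this sketch."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def available_channels(lat, lon):
    """Channels a device at (lat, lon) may use without harming incumbents."""
    blocked = {
        ch for ch, ilat, ilon, radius in INCUMBENTS
        if _distance_km(lat, lon, ilat, ilon) <= radius
    }
    return sorted(ALL_CHANNELS - blocked)

# A device inside both protection zones loses channels 23 and 27;
# one far away in a rural area sees the full set of channels.
print(available_channels(-33.93, 18.45))
print(available_channels(-30.00, 25.00))
```

Real databases also account for antenna height, transmit power, and terrain-aware propagation models rather than simple circles, but the principle is the point: exclusivity is replaced by a computed guarantee of protection from interference, which is exactly the reframing Marshall proposes.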