Author’s Note: Between my trip to Vienna and my general school work, I’ve run a bit behind on writing blog posts, so for this month I’ll be putting up a few things I wrote for my Smart Cities class, taught by Professor Alenka Poplin (from my various Vienna posts).
As smart city programs become more and more common, it is becoming necessary to understand them and recognize their successes and, perhaps more importantly, their failures. Studying any individual city can give some information, but real understanding comes from comparing these projects to each other. However, comparisons of such complex entities are not easy, and a lot of work has gone into finding ways to compare cities to various ends.
There are any number of reasons to compare smart cities: choosing the best place to live or build a business, finding ideas for new programs (and missteps to avoid), marketing successes and revealing opportunities for growth, or simply studying academically how these cities perform. Different needs and biases have produced differing comparison methods and results, from simple if opaque rankings, to complex categorical analyses, to individualized qualitative descriptions. Comparisons also vary in their scope, from massive worldwide studies to granular examinations of small categories.
Rankings
Rankings are perhaps the most common type of comparison. Typically, rankings use a number of indicators, split into groups by related concepts. These indicators can be anything from the number of Apple Stores in a city to the development of a bike-sharing system. These indicators are quantified and statistically manipulated by various regression and weighting techniques to make them comparable, before final rankings are declared.
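As a rough illustration of the mechanics (not of any particular index’s actual methodology), the sketch below min-max normalizes a few indicators, applies a set of weights, and sorts the result. Every city name, figure, and weight in it is invented for the example.

```python
# A minimal sketch of how a typical ranking is assembled: normalize each
# indicator across cities, apply weights, and sort. All figures, city
# names, and weights here are invented for illustration only.

indicators = {
    "City A": {"gdp_per_capita": 65000, "bike_share_stations": 120, "air_quality": 40},
    "City B": {"gdp_per_capita": 48000, "bike_share_stations": 300, "air_quality": 70},
    "City C": {"gdp_per_capita": 30000, "bike_share_stations": 80,  "air_quality": 90},
}

weightings = {
    "economy-heavy":     {"gdp_per_capita": 0.5, "bike_share_stations": 0.2, "air_quality": 0.3},
    "environment-heavy": {"gdp_per_capita": 0.2, "bike_share_stations": 0.3, "air_quality": 0.5},
}

# Min-max normalize each indicator so every value falls between 0 and 1.
normalized = {}
for ind in next(iter(indicators.values())):
    column = {city: vals[ind] for city, vals in indicators.items()}
    lo, hi = min(column.values()), max(column.values())
    normalized[ind] = {city: (v - lo) / (hi - lo) for city, v in column.items()}

# Weighted sum per city under each weighting, then sort to rank.
for label, weights in weightings.items():
    scores = {
        city: sum(weights[ind] * normalized[ind][city] for ind in weights)
        for city in indicators
    }
    ranking = sorted(scores, key=scores.get, reverse=True)
    print(label, "->", ranking)
```

Even in this toy example, shifting weight from the economic indicator to the environmental one swaps two of the three cities, which is exactly the kind of sensitivity that makes the choices behind a ranking matter so much.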
Rankings are popular due to their simplicity for audiences and their general usefulness for decision-making, but they have numerous drawbacks. Since rankings inherently require simplification to numbers, qualitative information is lost quickly in the process. These comparisons are excellent for information that is easily quantified and readily available, such as economic figures, road length, or even basic quality-of-life measures. However, they fail to address specific projects or special qualities that cities might possess. Weighting, the choice of indicators, and the choice of cities also leave room for bias, and often lead to drastic differences between rankings. Meanwhile, the information that actually goes into a comparison is often passed over in favor of the final ranking, especially by the marketing departments of “winning” cities.
IESE Cities in Motion Index (Berrone & Ricart, 2018)
One of the most popular rankings is the IESE Cities in Motion Index (CIMI). Forbes magazine uses CIMI for its listings of smart cities, as do other media sources. CIMI is an annual index, now in its fifth edition, covering 165 cities across 80 countries. It features 9 dimensions and 83 indicators drawn from many different data sources. The final ranking has consistently put New York, London, and Paris in the top three, while the rest of the spots tend to jockey around from year to year. CIMI also serves as a platform for a number of more detailed extensions.
The breakdown of information includes write-ups and top-ten lists for each dimension, full rankings of every city within each dimension, and a handful of short “special case” analyses for notable cities. The end of the report features radar diagrams for each city, although the detail for most cities is thin. The broader Cities in Motion platform includes minibooks on “best practices” and individual case studies. The entire report has excellent visual design, a necessity for a ranking that wants to appeal to media and marketing rather than academia.
However, the CIMI has a number of weaknesses and other issues. One of the most interesting, and most indicative of the interests of the authors and their audience, is how the dimensions are weighted. “Economy” carries nearly twice the weight in the final ranking of every other dimension except “environment.” Oddly, for something cited as measuring “smart cities” (although the words appear only five times in the report itself), technology is the lowest-weighted dimension. No justification is given for these weightings.
Many indicators also greatly favor large, global cities. Two of the five indicators for “urban planning” are the number of buildings and the number of high-rise buildings, and “international outreach” includes the number of McDonald’s restaurants per city. “Technology” includes registered users on various social media platforms and the number of Apple Stores. Although some indicators are rated per capita or by percentage, none of these are. That makes it all the more impressive that Reykjavik, one of the smallest cities in the study, placed fifth on the strength of its environmental performance. The highest-rated cities, meanwhile, perform terribly in some dimensions: New York rates 109th in “social cohesion” and 99th in “environment.” Such weak spots appear throughout the top of the ranking, especially in social cohesion, while spots of excellent performance pop up toward the lower end.
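The scale bias is easy to see with invented figures: a raw count of high-rise buildings rewards sheer size, while the same data expressed per capita can favor a far smaller city. The numbers below are made up, not taken from the index.

```python
# Toy comparison of an absolute-count indicator versus a per-capita one.
# Both cities and all figures are invented for illustration.

cities = {
    "Megacity":   {"population": 8_500_000, "high_rises": 6000},
    "Small city": {"population":   130_000, "high_rises":  200},
}

for name, c in cities.items():
    per_100k = c["high_rises"] / c["population"] * 100_000
    print(f"{name}: {c['high_rises']} high-rises in total, "
          f"{per_100k:.0f} per 100,000 residents")

# On the raw count the megacity wins 6000 to 200; per capita, the small
# city has more than twice as many high-rises (about 154 vs. 71).
```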
While the CIMI report itself does a good job of communicating at least some of these weaknesses, the real issue comes in how it is reported on. The full report runs some 86 pages of complex text and detail, unlikely to be read by the public. The Forbes article, meanwhile, barely goes beyond the top 10, as is standard when reporting on rankings.
Qualitative Surveys
To provide more detail about smart cities, a comparison has to be descriptive. Rankings take targets, goals, and indicators and ascribe numbers to them, but often lack specific strategic detail or recommendations. A more qualitative survey can take broader ideas and describe cities in terms of them, while still keeping interpretation reasonably simple. These surveys generally take a number of principles of smart cities and the various strategies for achieving (or failing to achieve) them. Instead of hard data, they explore what a city is actually doing to be smart. Examining cities in this context can reveal successful strategies as well as successful cities, as the two tend to go hand in hand.
Such surveys are somewhat less popular, likely due to their academic bent and lack of generalized usefulness. It is a lot harder for a city to market specific traits than to claim first place in a ranking. However, they are more useful for actual governance, since they identify the specific areas and strategies a city may need to work on, as well as the existing strengths it can build on.
Smart City Characteristics (Angelidou, 2017)
One study that describes smart cities by their characteristics is by Dr. Margarita Angelidou, whose goal is to define and review strategies based on the large body of existing literature. Her report focuses on 15 specific programs across 14 countries and defines 10 characteristics. The characteristics and their defining strategies are straightforward enough that any city’s plan could be comfortably classified by them.
The 10 characteristics each have one or two categories, each broken down into a handful of strategies. “Technology, ICTs and the Internet,” for example, includes “tools and technologies” and “applications and e-services.” “Tools and technologies” is broken down further into “data management, public participation, and smart city sectoral applications.” Some characteristics are less about how smart a city is and more about what kind of city it is, such as “Locally Adapted Strategies,” which covers whether a city is a global city or specially adapted to its local culture and needs. Each characteristic also includes a few paragraphs of description and justification, and all are specific to plans for smart cities rather than existing or historic city characteristics.
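A nested structure makes the hierarchy easier to see. The fragment below encodes only the names quoted above; the full taxonomy in the paper is considerably larger, and the gaps simply mark parts not listed here.

```python
# Fragment of the characteristic -> category -> strategy hierarchy,
# using only the names quoted above; the paper defines many more.

characteristics = {
    "Technology, ICTs and the Internet": {
        "tools and technologies": [
            "data management",
            "public participation",
            "smart city sectoral applications",
        ],
        "applications and e-services": [],  # strategies omitted here
    },
    "Locally Adapted Strategies": {
        # covers whether a plan targets a global city or adapts to
        # local culture and needs; categories omitted here
    },
}

# Classifying a city plan then amounts to recording which strategies
# it subscribes to under each characteristic.
```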
The cities analyzed include some existing cities’ plans, such as New York Digital City and the Smart London Plan, while others are master-planned greenfield developments, such as Masdar City in Abu Dhabi and PlanIT Valley in Portugal. Some of these communities are unbuilt, with research drawn only from their plans and press coverage. The common factors are that all of these cities have strategic plans and solid data to draw on. Each strategy is briefly described in the report.
The final results, rather than being split by city like most rankings, are split by characteristic. Individual strategies are examined and the number of cities subscribing to each is given. However, each program and its related strategies are still described for each characteristic in the appendix. The author notes trends across the characteristics, calling attention to alarming ones such as how little attention the plans pay to privacy and quality of life.
Case Studies
To get even more descriptive, comparisons can look closely at individual cities. These studies can take many forms, including narratives, technical descriptions, and high-level theoretical frameworks. Generally, they are much more freeform and less data-focused. However, the research behind them tends to be much more active and in-depth.
Although case studies of individual cities are not uncommon, ones that compare a number of cities are somewhat rare. Research for them is difficult, their results can be dense, and they aren’t as immediately useful for many readers. However, they are extremely helpful as data sources for other types of comparisons, especially qualitative surveys.
Smart City Cases (Anthopoulos, 2017)
One such case study report aims to compare the theory of “Smart Utopias” with “Smart Reality.” The study examines 10 cities of various sizes across almost every continent. These include both new and existing cities, but all are somewhat famous or are capitals of their countries. The research is not really designed for comparison, which makes comparing the cities difficult, but it provides much more detail on each one than other types of reports.
The research uses four main sources: literature review, official reports from the cities, interviews, and walkthroughs of each city. The literature and official reports are both theoretical approaches to the smart city, while the in-person interviews and walkthroughs provide a window into the reality.
Results are broken down along similar lines in a table that also notes the presence of specific characteristics, such as smart infrastructure. From there, each city gets a summary covering its scale, how it defines its smartness, and “the fringes,” other notes and conclusions. These summaries include both academic descriptions and first-person narrative sections describing the author’s trips to the cities. Each city gets approximately three-quarters of a page, with plenty of detail and specific conclusions drawn from it.
Although it doesn’t work well as a direct comparison, the report does a good job of summarizing what defines these smart cities and how those lessons might be applied to others. One thing worth noting is that many of the cities in question do not self-identify as “smart,” but the author insists that they succeed at being so anyway.
References
Angelidou, M. (2017). The Role of Smart City Characteristics in the Plans of Fifteen Cities. Journal of Urban Technology, 24(4), 3–28. https://doi.org/10.1080/10630732.2017.1348880
Anthopoulos, L. (2017). Smart utopia VS smart reality: Learning by experience from 10 smart city cases. Cities, 63, 128–148. https://doi.org/10.1016/j.cities.2016.10.005
Berrone, P., & Ricart, J. E. (2018). IESE Cities in Motion Index. https://doi.org/10.15581/018.ST-471