Category Archives: Series

Reflecting on “Data” and “Big Data” for Cities

The supercomputer Arctur-1, 2012. By Arctur (Own work) [CC BY-SA 3.0], via Wikimedia Commons.

Part 4 of the series Thriving Cities in a World of Big Data 

Given the rapid pace of city growth and the concurrent demand for better infrastructure and services, pressure on city leaders and managers to make smart policy and planning decisions around investment has never been greater. Limited public budgets, demands for open participatory government, and aging and deteriorating infrastructure also add to the complexities of achieving prosperous, sustainable, resilient, and inclusive cities. This increasingly complex planning environment is driving the demand for data on cities.

The massive collecting and sorting of information known as Big Data is responding to this need and becoming a necessary and useful tool for city leaders. However, in order to create broader knowledge of cities, Big Data must be contextualized and complemented by standardized, comparative city metrics driven by the demand of city leaders themselves. Standardized indicators reported by cities, such as those in the new international standard ISO 37120, Sustainable Development of Communities – Indicators for City Services and Quality of Life, are needed to provide a more complete picture of city performance to inform decision making.

ISO 37120 was published in May 2014 by the International Organization for Standardization (ISO). The standard defines and establishes methodologies for a comprehensive set of indicators that enable a city of any size, in a developed or developing economy, to track and measure its social, economic, and environmental performance in relation to other cities.

Standardized indicators can help reframe Bent Flyvbjerg’s question—“where are we going?”—as “where ought we be going?” For cities, standardized indicators are important for benchmarking, guiding investments, and tracking progress. Cities stand to benefit from this type of data precisely because standardization enables city-to-city learning and the exchange of best practices. Such data also empowers citizens by making them better informed about their city’s service delivery, with the end goal of improving quality of life.
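To make the benchmarking idea concrete, here is a minimal sketch of how a handful of standardized indicators might be normalized and compared across cities. The indicator names echo ISO 37120 themes, but the cities, figures, and scoring scheme are invented for illustration; they are not drawn from the standard or from any reported data.

```python
# A toy city-to-city benchmark over standardized indicators.
# Cities, values, and weights are hypothetical.

indicators = {
    "pm2_5_concentration": False,   # False: lower is better
    "green_area_per_100k": True,    # True: higher is better
    "avg_commute_minutes": False,
}

cities = {
    "City A": {"pm2_5_concentration": 12.0, "green_area_per_100k": 450.0, "avg_commute_minutes": 34.0},
    "City B": {"pm2_5_concentration": 9.5,  "green_area_per_100k": 620.0, "avg_commute_minutes": 41.0},
    "City C": {"pm2_5_concentration": 18.2, "green_area_per_100k": 310.0, "avg_commute_minutes": 28.0},
}

def benchmark(cities, indicators):
    """Min-max normalize each indicator to [0, 1] and average into one score."""
    scores = {name: 0.0 for name in cities}
    for ind, higher_is_better in indicators.items():
        values = [c[ind] for c in cities.values()]
        lo, hi = min(values), max(values)
        for name, data in cities.items():
            norm = (data[ind] - lo) / (hi - lo) if hi > lo else 0.5
            scores[name] += norm if higher_is_better else 1.0 - norm
    return {name: total / len(indicators) for name, total in scores.items()}

for name, score in sorted(benchmark(cities, indicators).items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```

The sketch assumes exactly what the standard is meant to guarantee: that every city measures each indicator the same way. Without that shared definition, the normalization step compares apples to oranges.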

ISO 37120 represents a critical shift in thinking when it comes to city data. It provides cities and stakeholders with a standardized approach and a global framework for third party verification of city data.

Noah Toly points to an example from Anthony Townsend of how Big Data helped to locate residents in Chicago who were vulnerable to extreme weather events. While Toly acknowledges that Big Data may be able to provide this type of information, he argues that it is less likely to tell decision makers why these people are vulnerable or what should be done to mitigate their risk. This is where standardized metrics can complement Big Data: by helping to track cities’ readiness for, and the impact of, extreme weather events and other risks. For this reason, in addition to ISO 37120, another standard for indicators on urban resilience is now being considered by the ISO. Risk and resilience indicators reported by cities will complement, and in fact inform, Big Data on risk from extreme weather events and help track how neighborhoods will be affected.

The first-ever certification system and Global Registry for ISO 37120 have been developed by the World Council on City Data (WCCD). The WCCD, launched in May 2014 at the Global Cities Summit in Toronto, was established to take this critical data agenda forward. The organization is coordinating efforts on city data to ensure a consistent and comprehensive platform for standardized urban metrics through ISO 37120 and future standards under development.

Back to the question at hand: Big Data can be a tool for government. However, given the concerns that Toly raises, it should not be the only tool. Big Data should be one of many datasets that cities turn to in order to ensure that they are in fact headed in the right direction when it comes to sustainable planning for the future.

Patricia McCarney is President and CEO of the World Council on City Data (WCCD), and a Professor of Political Science and Director of the Global Cities Institute (GCI) at the University of Toronto.

. . . . . . . .


Harnessing Big Data to Democratic Ends

Seattle skyline at night, 2002. Wikimedia Commons.

Part 3 of the series Thriving Cities in a World of Big Data 

It’s easy to be afraid of Big Data. (Like Big Brother or Big Tobacco, it’s coming for us.) It’s even easier to be excited about it. As Noah Toly noted in his recent posts on Common Place, the same qualities of Big Data can inspire both utopian dreams and dystopian fears. But what can Big Data do for–or to–democracy?

In 1992, the United Nations Conference on Environment and Development (UNCED) called for governments and NGOs to “develop and identify indicators of sustainable development in order to improve the information basis for decision-making at all levels.” The hope was to build new, large-scale statistical resources that would help assess and craft policies. The UNCED’s goal was twofold: first, to “bridge the data gap” that exists at all levels of government on key environmental and economic issues; second, to improve “information availability” in order to ensure that data would be accessible to all decision makers and managed securely and openly. UNCED hoped not only to improve elite decision making but also to democratize sustainable development practices.

The data the UNCED proposed to track would serve democracy in a “broad sense,” helping individuals and institutions at both the international and grassroots levels to engage with the pressing questions of our time. But while obviously helpful to the functioning of democratic societies, these large-scale statistical measures also present problems for them. For example, elites can use such data to support what appear to be their interests alone—a truth borne out by metrics such as GDP, which can be used to monitor and inform the economic power of the wealthy without reflecting the well-being of the population as a whole.

Indeed, assorted measurements and data have long been used by governments and political elites to control populations, going back to the first efforts by monarchs to require thorough census data on their subjects. As political scientist and anthropologist James Scott shows in Seeing Like a State, the transparency that statistical measures give to complex political phenomena can also make citizens more “legible” to, and thus controllable by, political elites. Big Data thus not only abets surveillance but can also bring the politically “illegible” into the fold by forcing their normalization.

Then, too, Big Data can be used to assert the sufficiency of statistical fact, thereby sometimes curtailing robust or fully informed democratic debate. Take recent partisan arguments pitting economic stimulus against austerity. Neither side is prepared to engage in a conversation over the data itself. Both sides claim to be in possession of the facts, the left asserting that stimulus will lead to sustained economic growth, the right that austerity is the only route to the same destination. In this case, as in others, conversations beginning with the assertion of absolute facts tend to end either in stalemate (as in recent debates about the federal budget in America) or in technocracy, where the statistician is favored over and against the popular will of the people (as with Italy’s Monti government).

Can we have the benefits of Big Data without the drawbacks? Is there a way to harness the democratic power of information while also promoting democratic open-mindedness and popular empowerment? The work of geographer Meg Holden, who studied the development and implementation of a regional environmental impact index called Sustainable Seattle (S2), is useful here. Holden’s study of S2 shows how complex phenomena such as urban sustainability and climate change can be made subjects of political debate through statistical measures.

Holden shows that grassroots attention to indicator development and application allowed the S2 project to bridge existing learning gaps between local politics and dispersed economic, ecological, cultural, and institutional phenomena. Rather than shifting knowledge of large-scale phenomena outside democratic debate, S2 promoted “social learning.” Residents of Seattle could (and did) use its findings to promote better democratic debates.

The demands S2 placed on its developers were many. Among other things, they had to become statistical experts in indicator development while finding measurements that meaningfully correlated to ecological questions. They had to be marketers who could advertise their project and findings to their community. And finally, they had to lobby in support of their findings in order to have an impact on local politics. While Holden shows that S2 was imperfect, the responsiveness of its developers to broad public concerns makes it a model for those hoping to harness Big Data for democratic ends.

Above all, the S2 project leaders recognized the limits of Big Data. Its developers acknowledged the imperfect nature and sources of data, the limitations inherent in its processing, the necessity of packaging findings, and the need to bring findings to all audiences and institutions. In the long run, such chastened optimism and humility may prove to be the most helpful lessons of all.

Callum Ingram is a graduate student in the Department of Politics at the University of Virginia. His dissertation research focuses on the use of urban space and architecture by democratic social movements.

Editor’s note: For more on this topic, subscribe to receive the spring issue of The Hedgehog Review, “Too Much Information.”

. . . . . . . .


The City of Big Data—Is It Enough?

Part 1 of the series Thriving Cities in a World of Big Data 

During 2013, 6,922,484 tweets originated in Chicago. That same year, members of the city’s Divvy bike-sharing program logged 750,000 rides and 633,647 callers reported abandoned vehicles, potholes, and rodent infestation through the city’s 311 program. These numbers represent just a fraction of all of the data collected in, by, and about Chicago last year. So I learned at the Chicago Architecture Foundation’s exhibition, “Chicago: City of Big Data,” which allows visitors to discover the role of information technology in cities by interacting with data on urban life at the individual, block, and city levels.

Chicago skyline, 2009. By J. Crocker, via Wikimedia Commons: http://commons.wikimedia.org/wiki/File:2009-09-18_3060x1020_chicago_skyline.jpg

During my visit, I also explored my “personal data trail,” a record of online purchases, social media engagement, and movement. After analyzing my average monthly bike-sharing use, music and movie streaming, online purchases, and social media use, a computer told me that I am a “Power User” whose data trail is highly valued by public and private interests alike: “You are the Alpha of the data pack. You rely on your digital devices, generating and consuming data like it’s your job. Much of your data trail is visible to anyone who’s interested.”

As it happens, a lot of people are interested. In terms of marketing, my data is worth about five hundredths of a penny. That may not sound like much, but to public agencies in Chicago, my data aggregated with that of others can help improve the delivery of city services. It turns out that tweets and bike-share rides are rich in information that can help city agencies become more responsive and efficient.

The Array of Things

Visitors to the exhibit also saw examples of data collection infrastructure specially manufactured for the urban environment. The Array of Things (AoT) is a network of sensors shielded by hardened enclosures, wrapped in protective composite tubing, and attached to city light posts. Touted as a “new urban infrastructure,” the AoT may prove to be as vital to a thriving urban future as highways, bridges, and power supplies. When completed, the AoT will collect data on temperature, humidity, light, precipitation, wind, vibration, carbon monoxide, carbon dioxide, nitrogen dioxide, ozone, volatile organic compounds, sulfur oxides, and particulate matter. Along with these environmental indicators, the sensors will analyze ambient noise and the number of Bluetooth- and WiFi-enabled mobile devices within a certain radius. Eventually, the data could be used to target the application of road salt during winter weather and to help pedestrians identify the safest routes home.
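As a thought experiment, here is a minimal sketch of how readings from AoT-style sensors might feed the road-salt use case just mentioned. The node names, fields, and thresholds are all hypothetical; the exhibit does not specify the Array of Things data formats or any API.

```python
# A hypothetical road-salt triage over invented sensor readings.
from statistics import mean

readings = [
    # One reading per observation: light-post node, air temp (C), relative humidity (%)
    {"node": "lamppost-014", "temp_c": -2.1, "humidity": 88},
    {"node": "lamppost-014", "temp_c": -1.8, "humidity": 90},
    {"node": "lamppost-027", "temp_c":  1.4, "humidity": 55},
    {"node": "lamppost-031", "temp_c": -0.6, "humidity": 93},
]

def needs_salt(node_readings, freeze_c=0.0, humid_pct=85):
    """Flag a node when average temperature is at or below freezing
    and humidity is high enough to suggest ice formation."""
    return (mean(r["temp_c"] for r in node_readings) <= freeze_c
            and mean(r["humidity"] for r in node_readings) >= humid_pct)

# Group readings by node, then decide where crews should spread salt.
by_node = {}
for r in readings:
    by_node.setdefault(r["node"], []).append(r)

for node, rs in sorted(by_node.items()):
    print(node, "salt" if needs_salt(rs) else "skip")
```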

“Chicago: City of Big Data” also demonstrated how city planners and architects combine “sophisticated computer models with growing pools of data” to discover and elucidate patterns that might otherwise remain unknown or opaque. These patterns can inform urban design and policy, as well as assist in building more sustainable, equitable, and efficient cities. According to the 2014 Chicago Council on Global Affairs Emerging Leaders Perspectives report, The Emerging Power of Big Data: The Chicago Experience, big data can lead to reliable and sustainable energy, predictable and efficient public transit, successful and accountable education, and targeted and effective law enforcement.

The use of data in urban design, planning, and policy is nothing new. In Chicago, especially, data collection has a long history. In the months leading up to the 1893 World’s Columbian Exposition, architect and urban designer Daniel Burnham, the fair’s lead planner, collected vast amounts of information regarding land- and water-based transportation in an effort to make the fair as accessible and profitable as possible. In every corner of the world, cities have used data for planning and design. How tall must bridges be to accommodate boat traffic? How many high-school students graduate in a certain district? What are the links between poverty, food insecurity, and homelessness? How many tourists vacation in the city? How many pilgrims visit and venerate a holy site? Cities use such information to learn from the past, respond to the present, and plan for the future.

So what is new about the use of data in urban planning, policy, and design? Unlike in the past, we now have access to an unprecedented stream of data: information that is rapidly available and of mind-boggling variety. We also now have the ability to study massive data pools for relationships that hadn’t previously been visible. In addition, we can employ tools such as cluster analysis and other data-mining techniques to discern patterns that we didn’t even know to look for before. Beyond the size, depth, and range of data, we now have even greater expectations of what we can learn from it. For some, the goal of Big Data is to make the whole city more responsive, giving us instant access to information we can hold in one hand, turning the city into a device. By harnessing Big Data, we suppose we may harness the power of urbanization and the city itself—a force with few rivals in history.
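For readers curious what “cluster analysis” looks like in practice, here is a minimal sketch that groups city blocks by their 311-call profiles, in the spirit of the Chicago examples above. The feature matrix is invented for illustration; a real analysis would draw on far larger and messier datasets.

```python
# A toy cluster analysis: grouping blocks by invented 311-complaint counts.
import numpy as np
from sklearn.cluster import KMeans

# One row per block: [pothole reports, abandoned-vehicle reports, rodent reports]
blocks = np.array([
    [42,  3,  1],
    [39,  5,  2],
    [ 4,  1, 28],
    [ 6,  2, 31],
    [18, 17, 15],
    [21, 14, 12],
])

# Ask for three clusters; blocks with similar complaint profiles land together.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(blocks)

for block_id, label in enumerate(labels):
    print(f"block {block_id}: cluster {label}")
```

With this toy data, the pothole-heavy, rodent-heavy, and mixed blocks should each land in their own cluster: exactly the kind of grouping an analyst would not necessarily have known to look for in advance.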

The City of the Big Shoulders

It is telling that the exhibition title, “City of Big Data,” riffs on one of Chicago’s most famous nicknames. Carl Sandburg dubbed Chicago the “City of the Big Shoulders” in his poem “Chicago,” published in his 1916 collection Chicago Poems. Sandburg opens by addressing the city directly:

“Hog Butcher for the World
Tool Maker, Stacker of Wheat,
Player with Railroads and the Nation’s Freight Handler;
Stormy, husky, brawling,
City of the Big Shoulders”

In its industrial heyday, the “City of the Big Shoulders” symbolized the gritty determination required to build a metropolitan giant from what author Donald L. Miller described as a “pestilential swamp.” Chicago has always had a reputation for swagger and resilience, coming back even stronger after the devastating 1871 fire and outpacing competitors to land the 1893 World’s Fair. Becoming a railroad hub and transfer point for people and goods moving from west to east further enhanced the city’s fortunes—and its reputation. Sandburg captured Chicago’s confidence:

“Bareheaded
Shoveling,
Wrecking
Planning,
Building, breaking, rebuilding,
Under the smoke, dust all over his mouth, laughing with white teeth,
Under the terrible burden of destiny laughing as a young man laughs,
Laughing even as an ignorant fighter laughs who has never lost a battle,
Bragging and laughing that under his wrist is the pulse, and under his ribs the heart of the people,
Laughing!”

Sandburg sees Chicago as an ignorant young fighter who has not yet tasted defeat—not a city that knows it can do anything, but a city that doesn’t yet know what it can’t do. What he identifies in the poem is a kind of naïveté that can be inspiring for cities, but one that can also lead to overconfidence and a tendency to overlook urban problems and limitations. After all, for all its industrial success, the City of the Big Shoulders has always been plagued by social ills of many kinds.

Likewise, the City of Big Data may be prone to the same overconfidence. The exhibition invited us to consider whether Big Data is any more promising than, so to speak, Big Shoulders. Is data mining the key to flourishing urban communities? What, if anything, might Big Data contribute to a thriving urban future? And if Big Data alone is not enough, what more do we need?

Noah Toly currently serves as Director of Urban Studies and Associate Professor of Politics & International Relations at Wheaton College. He has taught various courses in environmental politics and policy, urban politics, and ethics. He is also a member of the Thriving Cities Project steering committee.

 

. . . . . . . .
