Last month, I traveled to Moldova to speak at a “smart society” summit hosted by the Moldovan national e-government center and the World Bank. I talked about what I’ve been seeing and reporting on around the world and some broad principles for “smart government.” It was one of the first keynote talks I’ve ever given and, from what I gather, it went well: the Moldovan government asked me to give a reprise to their cabinet and prime minister the next day.
I’ve embedded the entirety of the morning session above, including my talk (which is about half an hour long). I was preceded by Professor Beth Noveck, the former deputy CTO for open government at the White House. If you watch the entire program, you’ll hear from:
Victor Bodiu, General Secretary, Government of the Republic of Moldova, National Coordinator, Governance e-Transformation Agenda
Dona Scola, Deputy Minister, Ministry of Information Technology and Communication
Andrew Stott, UK Transparency Board, former UK Government Director for Transparency and Digital Engagement
Arcadie Barbarosie, Executive Director, Institute of Public Policy, Moldova
Without planning to, I delivered a one-liner that morning that’s worth rephrasing and reiterating here: Smart government should not just serve citizens with smartphones.
I look forward to your thoughts and comments, for those of you who make it through the whole keynote.
On Friday night, a packed room of eager potential entrepreneurs, developers and curious citizens watched US CTO Todd Park and Bill Eggers kick off Startup Weekend DC in Microsoft’s offices in Chevy Chase, Maryland.
Park wants to inject open data as a “fuel” into the economy. After talking about the success of the Health Data Initiative and the Health Datapalooza, he shared a series of websites where aspiring entrepreneurs could find data to use:
Park also made an “ask” of the attendees of Startup Weekend DC that I haven’t heard from many government officials: he requested that if they A) use the data or B) run into any trouble accessing it, they let him know.
“If you had a hard time or found a particular RESTful API moving, let me know,” he said. “It helps us improve our performance.” And then he gave out his email address at the White House Executive Office of the President, as he did at SXSW Interactive in Austin in March of this year. Asking the public for feedback on data quality — particularly entrepreneurs and developers — and providing contact information to do so is, to put it bluntly, something every city and state official who has stood up an open data platform could and should be doing. In this context, the US CTO has set a notable example for the country.
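To make that feedback loop concrete, here is a minimal sketch of the kind of check a developer might run before firing off that email: fetch a dataset from an open-data endpoint and record exactly what went wrong if the request fails. The URL below is hypothetical, standing in for whichever HealthData.gov or Data.gov resource you are actually using, and the script assumes Python with the requests library installed.

```python
import requests

# Hypothetical dataset URL: substitute a real HealthData.gov or Data.gov resource.
DATASET_URL = "https://example.gov/api/datasets/hospital-quality.json"

try:
    response = requests.get(DATASET_URL, timeout=30)
    response.raise_for_status()  # surfaces HTTP errors (404, 500, ...)
    records = response.json()    # surfaces malformed or non-JSON payloads
    print(f"OK: retrieved {len(records)} records")
except requests.RequestException as err:
    # This is the kind of trouble Park asked to hear about.
    print(f"Problem accessing the data: {err}")
except ValueError as err:
    print(f"Data came back, but it is not valid JSON: {err}")
```

Run against a live endpoint, a script like this tells you in one line whether the data is actually usable, which is exactly the feedback Park was soliciting.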
Examples of startups, gap filling and civic innovation
Following Park, author and Deloitte consultant Bill Eggers talked about innovative startups and the public sector. I’ve embedded video of his talk below:
Eggers cited three different startups in his talk: Recycle Bank, Avego and Kaggle.
1) Recycle Bank’s gamification of recycling has driven a 19-fold increase in some cities, said Eggers. The startup has 3 million members and is now setting its sights on New York City.
2) The real-time ridesharing provided by Avego holds the promise of hugely reducing traffic congestion, said Eggers. According to the stats he cited, 80% of people on the road are currently driving alone. Avego has raised tens of millions of dollars to try to better optimize transportation.
3) Anthony Goldbloom found a hole in the big data market with Kaggle, said Eggers, which matches data challenges with data scientists. There are now some 19,000 registered data scientists in the Kaggle database.
Eggers cited the success of a Kaggle competition to map dark matter, a problem on which millions of dollars had already been spent. The results of this open innovation were better than what science had achieved prior to the competition. Kaggle has created a market out of writing better algorithms.
After Eggers spoke, the organizers of Startup Weekend explained how the rest of the weekend would proceed and asked attendees to pitch their ideas. One particular idea, for this correspondent, stood out, primarily because of the young fellows pitching it:
In 2012, making sense of big data, particularly unstructured data, through narrative and context is now a strategic imperative for leaders around the world, whether they serve in Washington, run media companies or trading floors in New York City or guide tech titans in Silicon Valley.
While big data carries the baggage of huge hype, the institutions of the federal government are getting serious about its genuine promise. On Thursday morning, the Obama Administration announced a “Big Data Research and Development Initiative,” with more than $200 million in new commitments. (See the fact sheet provided by the White House Office of Science and Technology Policy at the bottom of this post.)
“In the same way that past Federal investments in information-technology R&D led to dramatic advances in supercomputing and the creation of the Internet, the initiative we are launching today promises to transform our ability to use Big Data for scientific discovery, environmental and biomedical research, education, and national security,” said Dr. John P. Holdren, Assistant to the President and Director of the White House Office of Science and Technology Policy, in a prepared statement.
The research and development effort will focus on advancing the “state-of-the-art core technologies” needed for big data, harnessing said technologies “to accelerate the pace of discovery in science and engineering, strengthen our national security, and transform teaching and learning,” and “expand the workforce needed to develop and use Big Data technologies.”
In other words, the nation’s major research institutions will focus on improving available technology to collect and use big data, apply them to science and national security, and look for ways to train more data scientists.
“IBM views Big Data as organizations’ most valuable natural resource, and the ability to use technology to understand it holds enormous promise for society at large,” said David McQueeney, vice president of software, IBM Research, in a statement. “The Administration’s work to advance research and funding of big data projects, in partnership with the private sector, will help federal agencies accelerate innovations in science, engineering, education, business and government.”
While $200 million is a relatively small amount of funding, particularly in the context of the federal budget or compared to the investments that are (probably) being made by Google and other major tech players, specific support for training and the subsequent application of big data within the federal government is important and sorely needed. The job market for data scientists in the private sector is so hot that government may well need to build up its own internal expertise, much in the same way LivingSocial is training coders at its Hungry Academy.
“Big data is a big deal,” blogged Tom Kalil, deputy director for policy at the White House OSTP, on the White House blog this morning.
We also want to challenge industry, research universities, and non-profits to join with the Administration to make the most of the opportunities created by Big Data. Clearly, the government can’t do this on its own. We need what the President calls an “all hands on deck” effort.
Some companies are already sponsoring Big Data-related competitions, and providing funding for university research. Universities are beginning to create new courses—and entire courses of study—to prepare the next generation of “data scientists.” Organizations like Data Without Borders are helping non-profits by providing pro bono data collection, analysis, and visualization. OSTP would be very interested in supporting the creation of a forum to highlight new public-private partnerships related to Big Data.
The White House is hosting a forum today in Washington to explore the challenges and opportunities of big data and discuss the investment. The event will be streamed live online from the headquarters of the AAAS in Washington, DC. I’ll be in attendance and will share what I learn.
“Researchers in a growing number of fields are generating extremely large and complicated data sets, commonly referred to as ‘big data,'” reads the invitation to the event from the White House Office of Science and Technology Policy. “A wealth of information may be found within these sets, with enormous potential to shed light on some of the toughest and most pressing challenges facing the nation. To capitalize on this unprecedented opportunity — to extract insights, discover new patterns and make new connections across disciplines — we need better tools to access, store, search, visualize, and analyze these data.”
John Holdren, Assistant to the President and Director, White House Office of Science and Technology Policy
Subra Suresh, Director, National Science Foundation
Francis Collins, Director, National Institutes of Health
William Brinkman, Director, Department of Energy Office of Science
Big data is data that exceeds the processing capacity of conventional database systems. The data is too big, moves too fast, or doesn’t fit the strictures of your database architectures. To gain value from this data, you must choose an alternative way to process it.
The hot IT buzzword of 2012, big data has become viable as cost-effective approaches have emerged to tame the volume, velocity and variability of massive data. Within this data lie valuable patterns and information, previously hidden because of the amount of work required to extract them. To leading corporations, such as Walmart or Google, this power has been in reach for some time, but at fantastic cost. Today’s commodity hardware, cloud architectures and open source software bring big data processing into the reach of the less well-resourced. Big data processing is eminently feasible even for small garage startups, which can cheaply rent server time in the cloud.
To learn more about the growing ecosystem of big data tools, watch my interview with Cloudera architect Doug Cutting, embedded below. Cutting created Lucene and led the Hadoop project at Yahoo before he joined Cloudera. Apache Hadoop is an open source framework that allows distributed applications based upon the MapReduce paradigm to run on immense clusters of commodity hardware, which in turn enables the processing of massive amounts of data.
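For readers who haven’t seen MapReduce in action, below is a toy, single-machine sketch of the paradigm using the canonical word-count example. Hadoop runs the same map and reduce phases, with a shuffle in between, across thousands of machines; the Python code here is purely illustrative and is not Hadoop itself.

```python
from collections import defaultdict

documents = ["big data is big", "data moves fast"]

# Map phase: emit a (word, 1) pair for every word in every document.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle phase: group the emitted pairs by key; Hadoop does this
# between the map and reduce phases, across the whole cluster.
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# Reduce phase: collapse each group to a single value, here a sum.
word_counts = {word: sum(counts) for word, counts in groups.items()}
print(word_counts)  # {'big': 2, 'data': 2, 'is': 1, 'moves': 1, 'fast': 1}
```

The appeal of the model is that the map and reduce functions stay this simple while the framework handles distribution, fault tolerance and the shuffle at cluster scale.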
Details on the administration’s big data investments
A fact sheet released by the White House OSTP follows, verbatim:
“National Science Foundation and the National Institutes of Health – Core Techniques and Technologies for Advancing Big Data Science & Engineering
“Big Data” is a new joint solicitation supported by the National Science Foundation (NSF) and the National Institutes of Health (NIH) that will advance the core scientific and technological means of managing, analyzing, visualizing, and extracting useful information from large and diverse data sets. This will accelerate scientific discovery and lead to new fields of inquiry that would otherwise not be possible. NIH is particularly interested in imaging, molecular, cellular, electrophysiological, chemical, behavioral, epidemiological, clinical, and other data sets related to health and disease.
National Science Foundation: In addition to funding the Big Data solicitation, and in keeping with its focus on basic research, NSF is implementing a comprehensive, long-term strategy that includes new methods to derive knowledge from data; infrastructure to manage, curate, and serve data to communities; and new approaches to education and workforce development. Specifically, NSF is:
· Encouraging research universities to develop interdisciplinary graduate programs to prepare the next generation of data scientists and engineers;
· Funding a $10 million Expeditions in Computing project based at the University of California, Berkeley, that will integrate three powerful approaches for turning data into information – machine learning, cloud computing, and crowd sourcing;
· Providing the first round of grants to support “EarthCube” – a system that will allow geoscientists to access, analyze and share information about our planet;
· Issuing a $2 million award for a research training group to support training for undergraduates to use graphical and visualization techniques for complex data.
· Providing $1.4 million in support for a focused research group of statisticians and biologists to determine protein structures and biological pathways.
· Convening researchers across disciplines to determine how Big Data can transform teaching and learning.
Department of Defense – Data to Decisions: The Department of Defense (DoD) is “placing a big bet on big data,” investing approximately $250 million annually (with $60 million available for new research projects) across the Military Departments in a series of programs that will:
· Harness and utilize massive data in new ways and bring together sensing, perception and decision support to make truly autonomous systems that can maneuver and make decisions on their own.
· Improve situational awareness to help warfighters and analysts and provide increased support to operations. The Department is seeking a 100-fold increase in the ability of analysts to extract information from texts in any language, and a similar increase in the number of objects, activities, and events that an analyst can observe.
To accelerate innovation in Big Data that meets these and other requirements, DoD will announce a series of open prize competitions over the next several months.
In addition, the Defense Advanced Research Projects Agency (DARPA) is beginning the XDATA program, which intends to invest approximately $25 million annually for four years to develop computational techniques and software tools for analyzing large volumes of data, both semi-structured (e.g., tabular, relational, categorical, meta-data) and unstructured (e.g., text documents, message traffic). Central challenges to be addressed include:
· Developing scalable algorithms for processing imperfect data in distributed data stores; and
· Creating effective human-computer interaction tools for facilitating rapidly customizable visual reasoning for diverse missions.
The XDATA program will support open source software toolkits to enable flexible software development for users to process large volumes of data in timelines commensurate with mission workflows of targeted defense applications.
National Institutes of Health – 1000 Genomes Project Data Available on Cloud: The National Institutes of Health is announcing that the world’s largest set of data on human genetic variation – produced by the international 1000 Genomes Project – is now freely available on the Amazon Web Services (AWS) cloud. At 200 terabytes – the equivalent of 16 million file cabinets filled with text, or more than 30,000 standard DVDs – the current 1000 Genomes Project data set is a prime example of big data, where data sets become so massive that few researchers have the computing power to make best use of them. AWS is storing the 1000 Genomes Project as a publicly available data set for free, and researchers will pay only for the computing services that they use.
Department of Energy – Scientific Discovery Through Advanced Computing: The Department of Energy will provide $25 million in funding to establish the Scalable Data Management, Analysis and Visualization (SDAV) Institute. Led by the Energy Department’s Lawrence Berkeley National Laboratory, the SDAV Institute will bring together the expertise of six national laboratories and seven universities to develop new tools to help scientists manage and visualize data on the Department’s supercomputers, which will further streamline the processes that lead to discoveries made by scientists using the Department’s research facilities. The need for these new tools has grown as the simulations running on the Department’s supercomputers have increased in size and complexity.
US Geological Survey – Big Data for Earth System Science: USGS is announcing the latest awardees for grants it issues through its John Wesley Powell Center for Analysis and Synthesis. The Center catalyzes innovative thinking in Earth system science by providing scientists a place and time for in-depth analysis, state-of-the-art computing capabilities, and collaborative tools invaluable for making sense of huge data sets. These Big Data projects will improve our understanding of issues such as species response to climate change, earthquake recurrence rates, and the next generation of ecological indicators.”
Further details about each department’s or agency’s commitments can be found at the following websites by 2 pm today:
It was in that context that I presented on “Open Data Journalism” this morning, which, to paraphrase Jonathan Stray, I’d define as obtaining, reporting on, curating and publishing open data in the public interest. My slides, which broadly describe what I’m seeing in the world of open government today, are embedded below.
Update: In the context of fauxpen data, beware “openwashing”: simply opening up data is not a replacement for a constitution that enforces the rule of law, free and fair elections, an effective judiciary, decent schools, basic regulatory bodies or civil society — particularly if the data does not relate to meaningful aspects of society. Adopting open data and digital government reforms is not quite the same thing as good government, although the two certainly can be and, in some cases, are related.
If a country launches an open data platform but curtails freedom of the press or assembly, undermines freedom of information laws or restricts the ability of government scientists to speak to the public, is it adopting “open government” — or doing something else?
NYC Hacks and Hackers co-organizer Chrys Wu was kind enough to ask my questions, posed over Twitter. Here are the answers I pulled from the video above:
How much data has been released? Park: “A ton.” He pointed to HealthData.gov as a scorecard and said that HHS isn’t just releasing brand new data. They’re “also making existing data truly accessible or usable,” he said. They’re taking “stuff that’s in a book or website and turning it into machine readable data or an API.”
What formats? Park: Lots and lots of different formats. “Some people put spreadsheets online, other people actually create open APIs and open services,” he said. “We’re trying to migrate people as much towards open API as possible.”
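As a rough illustration of the migration Park describes, from a spreadsheet online to an open API, the sketch below wraps a published CSV file in a simple read-only JSON service. It assumes Python with Flask installed; the filename and route are hypothetical, not an actual HHS service.

```python
import csv

from flask import Flask, jsonify

app = Flask(__name__)

# Load the published spreadsheet once at startup; the filename is hypothetical.
with open("hospital_measures.csv", newline="") as f:
    ROWS = list(csv.DictReader(f))

@app.route("/api/measures")
def list_measures():
    # Machine-readable output: every row as JSON, rather than a download link.
    return jsonify(ROWS)

if __name__ == "__main__":
    app.run(port=8000)
```

Hit /api/measures with curl or a browser and you get the same rows the spreadsheet holds, but in a form other programs can consume directly, which is the difference Park is driving at.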
Impact to date? “The best quantification that I can articulate is the Health Datapalooza,” he said. “50 companies and nonprofits updated and deployed new versions of their platforms and services. The data is already helping millions of Americans in all kinds of ways.”
Park emphasized that it’s still quite early for the project, only 18 months in. He also emphasized that the work isn’t just about data: it’s about how and where it’s used. “Data by itself isn’t useful. You don’t go and download data and slather data on yourself and get healed,” he said. “Data is useful when it’s integrated with other stuff that does useful jobs for doctors, patients and consumers.”
Todd Park, chief technology officer of the Department of Health and Human Services, has been working to unlock innovation through open health data for over a year now. On many levels, the effort is the best story in federal open data. In the video below, he talks with my publisher, Tim O’Reilly, about collaboration and innovation in the healthcare system.
Several core pillars of the federal open government initiatives brought online by the Obama administration may be shuttered by proposed Congressional budget cuts. Data.gov, IT.USASpending.gov and five other websites that offer platforms for open government transparency are facing imminent closure. A comprehensive report filed by Jason Miller, executive editor of Federal News Radio, confirmed that the United States Office of Management and Budget is planning to take open government websites offline over the next four months because of a 94% reduction in federal funding for the programs in the Congressional budget. Daniel Schuman of the Sunlight Foundation first reported the cuts to the budget for data transparency, and he talked to Federal News Radio about the potential end of these transparency platforms this week.
Cutting these funds would also shut down the FedSpace federal social network and, notably, the FedRAMP cloud computing cybersecurity program. Unsurprisingly, open government advocates at the Sunlight Foundation and in the larger community have strongly opposed these cuts.
As Nancy Scola reported for techPresident, Donny Shaw put the proposal to defund open government data in perspective at OpenCongress: “The value of data openness in government cannot be overestimated, and for the cost of just one-third of one day of missile attacks in Libya, we can keep these initiatives alive and developing for another year.”
The returns from these e-government initiatives in terms of transparency are priceless. They will help the government operate more effectively and efficiently, thereby saving taxpayer money and aiding oversight. Although we have significant issues with some of these programs’ data quality, and we are concerned that the government may be paying too much for the technology, there should be no doubt that we need the transparency they enable. For example, fully realized transparency would allow us to track every expense and truly understand how money — like that in the electronic government fund — flows to federal programs. Government spending and performance data must be available online, in real time, and in machine readable formats.
There is no question that the Obama administration has come under heavy criticism for the quality of its transparency efforts from watchdogs, political opponents and the media. OMB Watch found progress on open government in a recent report but cautioned that there’s a long road ahead. It is clear that we are in open government’s beta period. The transparency that Obama promised has not been delivered, as Charles Ornstein, a senior reporter at ProPublica, and Hagit Limor, president of the Society of Professional Journalists, wrote today in the Washington Post. There are real data quality and cultural issues that need to be addressed to match the rhetoric of the past three years. “Government transparency is not the same as data that can be called via an API,” said Virginia Carlson, president of the Metro Chicago Information Center. “I think the New Tech world forgets that — open data is a political process first and foremost, and a technology problem second.”
Carlson highlighted how some approaches taken in establishing Data.gov have detracted from success of that platform:
First, no distinction was made between making transparent operational data about how the government works (e.g., EPA cleanup sites, Medicaid records) and making statistical data more useful (data regarding the economy and population developed by the major Federal Statistical Agencies). So no clear priorities were set regarding whether it was an initiative meant to foster innovation (which would emphasize operational data) or whether it was an initiative meant to open data dissemination lines for agencies that had already been in the business of dissemination (Census, BLS, etc.), which would have suggested an emphasis on developing API platforms on top of current dissemination tools like American Fact Finder or DataFerrett.
Instead, a mandate came from above that each agency or program was responsible for putting X numbers of data sets on data.gov, with no distinction made as to source or usefulness. Thus you have weird things like cutting up geo files into many sub-files so that the total number of files on data.gov is higher.
The federal statistical agencies have been disseminating data for tens of decades. They felt that the data.gov initiative rolled right over them, for the most part, and there was a definite feeling that the data.gov people didn’t “get it” from the FSA perspective – who are these upstarts coming in to tell us how to release data, when they don’t understand how the FSAs function, how to deal with messy statistical data that have a provenance, etc. An open data session at the last APDU conference saw the beginnings of a conversation between data.gov folks and the APDU folks (who tend to be attached to the major statistical agencies), but there is a long way to go.
Second, individuals in bureaucracies are risk-averse. The political winds might be blowing toward openness now, but executives come and go while those in the trenches stay (or would like to). Thus the tendency was to find data that was relatively low-risk. Agencies literally culled their catalogs to find the least controversial data that could be released.
Neither technical nor cultural change will happen with the celerity that many would like, given the realities imposed by the pace of institutional change. “Lots of folks in the open government space are losing their patience for this kind of thing, having grown accustomed to startups that move at internet speed,” said Tom Lee, director of Sunlight Labs. “But USAspending.gov really can be a vehicle for making smarter decisions about federal spending.”
“Obviously the data quality isn’t there yet. But you know what? OMB is taking steps to improve it, because the public was able to identify the problems. We’re never going to realize the incredible potential of these sites if we shutter them now. A House staffer, or journalist, or citizen ought to be able to figure out the shape of spending around an issue by going to these sites. This is an achievable goal! Right now they still turn to ad-hoc analyses by GAO or CRS — which, incidentally, pull from the same flawed data. But we really can automate that process and put the power of those analyses into everyone’s hands.”
Potential rollbacks to government transparency, seen in that context, are detrimental to all American citizens, not just those who support one party or the other. Or, for that matter, none at all. As Rebecca Sweger writes at the National Priorities Project, “although $32 million may sound like a vast sum of money, it is actually .0009% of the proposed Federal FY11 budget. A percentage that small does not represent a true cost-saving initiative–it represents an effort to use the budget and the economic crisis to promote policy change.”
Lee also pointed to the importance of TechStat to open government; the process relies on the IT Dashboard, which the White House released as open source yesterday. “TechStat is one of the most concrete arguments for why cutting the e-government fund would be a huge mistake,” he said. “The TechStat process is credited with billions of dollars of savings. Clearly, Vivek [Kundra, the federal CIO] considers the IT Dashboard to be a key part of that process. For that reason alone cutting the e-gov fund seems to me to be incredibly foolish. You might also consider the fact pointed out by NPP: that the entire e-gov budget is a mere 7.7% of the government’s FOIA costs.”
In other words, it costs far more to release the information by the current means. “This is the heart of the case for data.gov and data transparency in general: to get useful information into the hands of more people, at a lower cost than the alternatives,” said Lee. Writing on the Sunlight Labs blog, Lee emphasized today that “cutting the e-gov funding would be a disaster.”
The E-Government Act of 2002 that supports modern open government platforms was originally passed with strong bipartisan support, long before the current president was elected. Across the Atlantic, the British parallel to Data.gov, Data.gov.uk, continues under a Conservative prime minister. Open government data can be used not just to create greater accountability, but also economic value. That point was made emphatically last week, when former White House deputy chief technology officer Beth Noveck made her position clear on the matter: cutting e-government funding threatens American jobs:
These are the tools that make openness real in practice. Without them, transparency becomes merely a toothless slogan. There is a reason why fourteen other countries whose governments are left- and right-wing are copying data.gov. Beyond the democratic benefits of facilitating public scrutiny and improving lives, open data of the kind enabled by USASpending and Data.gov save money, create jobs and promote effective and efficient government.
Noveck also referred to the Economist‘s support for open government data: “Public access to government figures is certain to release economic value and encourage entrepreneurship. That has already happened with weather data and with America’s GPS satellite-navigation system that was opened for full commercial use a decade ago. And many firms make a good living out of searching for or repackaging patent filings.”
As Clive Thompson reported at Wired this week, public sector data can help fuel jobs: “shoving more public data into the commons could kick-start billions in economic activity.” Thompson focuses on the story of Brightscope, where government data drives the innovation economy. “That’s because all that information becomes incredibly valuable in the hands of clever entrepreneurs,” wrote Thompson. “Pick any area of public life and you can imagine dozens of startups fueled by public data. I bet millions of parents would shell out a few bucks for an app that cleverly parsed school ratings, teacher news, test results, and the like.”
Lee doesn’t entirely embrace this view but makes a strong case for the real value that does persist in open data. “Profits are driven toward zero in a perfectly competitive market,” he said.
Government data is available to all, which makes it a poor foundation for building competitive advantage. It’s not a natural breeding ground for lucrative businesses (though it can certainly offer a cheap way for businesses to improve the value of their services). Besides, the most valuable datasets were sniffed out by business years before data.gov had ever been imagined. But that doesn’t mean that there isn’t huge value that can be realized in terms of consumer surplus (cheaper maps! free weather forecasts! information about which drug in a class is the most effective for the money!) or through the enactment of better policy as previously difficult-to-access data becomes a natural part of policymakers’ and researchers’ lives.
There are a growing number of strong advocates who are coming forward to support the release of open government data through funding e-government. My publisher, Tim O’Reilly, offered additional perspective today as well. “Killing open data sites rather than fixing them is like Microsoft killing Windows 1.0 and giving up on GUIs rather than keeping at it,” said O’Reilly. “Open data is the future. The private sector is all about building APIs. Government will be left behind if they don’t understand that this is how computer systems work now.”
As Schuman highlighted at SunlightFoundation.com, the creator of the World Wide Web, Sir Tim Berners-Lee, has been encouraging his followers on Twitter to sign the Sunlight Foundation’s open letter to Congress asking elected officials to save the data.
What happens next is in the hands of Congress. A congressional source who spoke on condition of anonymity said that members are aware of the issues raised by cuts to e-government funding and are working on preserving core elements of these programs. Concerned citizens can contact the office of the House Majority Leader, Representative Eric Cantor (R-VA) (@GOPLeader), at 202.225.4000.
1. big data: how can we strengthen the capacity to understand massive data?
2. new products: what constitutes high value data?
3. open platforms: what are the policy implications of enabling 3rd party apps?
4. international collaboration: what models translate to strengthen democracy internationally?
5. digital norms: what works and what doesn’t work in public engagement?
In the video below, former White House deputy CTO for open government Beth Noveck reflects on the outcomes and results of the open government R&D summit at the end of its second day. If you’re interested in a report from one of the organizers, you’d be hard pressed to do any better.
The end of the beginning for open government?
The open government R&D summit has since come under criticism from one of its attendees, Expert Labs’ director of engagement Clay Johnson, for being formulaic, “self congratulatory” and not tackling the hard problems that face the country. He challenged the community to do better:
These events need to solicit public feedback from communities and organizations and we need to start telling the stories of Citizen X asked for Y to happen, we thought about it, produced it and the outcome was Z. This isn’t to say that these events aren’t helpful. It’s good to get the open government crowd together in the same room every once in a while. But knowing the talents and brilliant minds in the room, and the energy that’s been put behind the Open Government Directive, I know we’re not tackling the problems that we could.
Noveck responded to his critique in a comment where she observed that “Hackathons don’t substitute for inviting researchers — who have never been addressed — to start studying what’s working and what’s not in order to free up people like you (and I hope me, too) to innovate and try great new experiments and to inform our work. But it’s not enough to have just the academics without the practitioners and vice versa.”
Justin Grimes, a Ph.D student who has been engaged in research in this space, was reflective after reading Johnson’s critique. “In the past few years, I’ve seen far more open gov events geared towards citizens, [developers], & industry than toward academics,” he tweeted. “Open gov is a new topic in academia; few people even know it’s out there; lot of potential there but we need more outreach. [The] purpose was to get more academics involved in conversation. Basically, government saying ‘Hey, look at our problems. Do research. Help us.'”
Johnson spoke with me earlier this year about what else he sees as the key trends of Gov 2.0 and open government, including transparency as infrastructure, smarter citizenship and better platforms. Given the focus he has put on doing, vs researching or, say, “blogging about it,” it will be interesting to see what comes out of Johnson and Expert Labs next.