
African News Challenge funds data journalism and open government tech

The post-industrial future of journalism is already here. It’s just not evenly distributed yet. The same trends changing journalism and society have the potential to create significant social change throughout the African continent, as states move from conditions of information scarcity to abundance.

That reality was clear on my recent trip to Africa, where I had the opportunity to interview Justin Arenstein at length during my visit to Zanzibar. Arenstein is building the capacity of African media to practice data-driven journalism, a task that has taken on new importance amid the digital disruption that has permanently altered how we discover, read, share and participate in news.

One of the primary ways he’s been able to build that capacity is through the African News Innovation Challenge (ANIC), a variant of the Knight News Challenge in the United States.

The 2011 Knight News Challenge winners illustrated data’s ascendance in media and government, with platforms for data journalism and civic connections dominating the field.

As I wrote last September, the projects that the Knight Foundation has chosen to fund over the last two years are notable examples of working on stuff that matters: they represent collective investments in digital civic infrastructure.

The first winners of the African News Innovation Challenge, which concluded this winter, look set to extend that investment throughout the continent of Africa.

“Africa’s media face some serious challenges, and each of our winners tries to solve a real-world problem that journalists are grappling with. This includes the public’s growing concern about the manipulation and accuracy of online content, plus concerns around the security of communications and of whistleblowers or journalistic sources,” wrote Arenstein on the News Challenge blog.

While the twenty 2012 winners include investigative journalism tools and whistleblower security, there’s also a focus on citizen engagement, digitization and making public data actionable. To put it another way, the “news innovation” that’s being funded on both continents isn’t just gathering and disseminating information: it’s now generating data and putting it to work in the service of the needs of residents or the benefit of society.

“The other major theme evident in many of the 500 entries to ANIC is the realisation that the media needs better ways to engage with audiences,” wrote Arenstein. “Many of our winners try tackle this, with projects ranging from mobile apps to mobilise citizens against corruption, to improved infographics to better explain complex issues, to completely new platforms for beaming content into buses and taxis, or even using drone aircraft to get cameras to isolated communities.”

In the first half of our interview, published last year at Radar, Arenstein talked about Hacks/Hackers, and expanding the capacity of data journalism. In the second half, below, we talk about his work at African Media Initiative (AMI), the role of open source in civic media, and how an unconference model for convening people is relevant to innovation.

What have you accomplished at the AMI to date?

Justin Arenstein: The AMI has been going on for just over three years. It’s a fairly young organization, and I’ve been embedded now for about 18 months. The major deliverables and the major successes so far have been:

  1. A $1 million African News Innovation Challenge, which was modeled fairly closely on the Knight Challenge, but with a different set of intended outputs.
  2. A network of Hacks/Hackers chapters across the continent.
  3. A number of technology support or technology development initiatives. Little pilot projects, invariably newsroom-based.

The idea is that we test ideas that are allowed to fail. We fund them in newsrooms and they’re driven by newsrooms. We match them up with technologists. We try and lower the barrier for companies to start experimenting and try and minimize risk as much as possible for them. We’ve launched a couple of slightly larger funds for helping to scale some of these ideas. We’ve just started work on a social venture or a VC fund as well.

You mentioned different outputs in the News Challenge. What does that mean?

Justin Arenstein: Africa hasn’t had the five-year kind of evolutionary growth that the Knight News Challenge has had in the U.S. What the News Challenge has done in the U.S. is effectively grown an ecosystem where newsrooms started to grapple with and accepted the reality that they have to innovate. They have to experiment. Digital is core to the way that they’re not only pushing news out but to the way that they produce it and the way that they process it.

We haven’t had any of that evolution yet in Africa. When you talk about digital news with African media, they think you’re speaking about social media or a website. We’re almost right back at where the News Challenge started originally. At the moment, what we’re trying to do is raise sensitivity to the fact that there are far more efficient ways of gathering, ingesting, processing and then publishing digital content — and building tools that are specifically suited for the African environment.

There are bandwidth issues. There are issues around literacy, language use and also, in some cases, very different traditions of producing news. The output of what would be considered news in Africa might not be considered news product in some Western markets. We’re trying to develop products to deal with those gaps in the ecosystem.

What were the most promising News Challenge entrants that actually relate to those outputs?

Justin Arenstein: Some of the projects that we thought were particularly strong or apt amongst the African News Challenge finalists included more efficient or more integrated ways to manage workflow. If you look at many of the workflow software suites in the north, they’re, by African standards, completely unaffordable. As a result, there hasn’t been any systemic way that media down here produce news, which means that there’s virtually no way that they are storing and managing content for repackaging and for multi-platform publishing.

We’re looking at ways of not reinventing a CMS [content management system], but actually managing and streamlining workflow from ingesting reporting all the way to publishing.

Some of the biggest blogs in the world are running on WordPress for a CMS. Why not use that where needed?

Justin Arenstein: I think I may have misspoken by saying “content management systems.” I’m referring to managing, gathering and storing old news, the production and the writing of new content, a three or four phase editing process, and then publishing across multiple platforms. Ingesting creative design, layout, and making packages into podcasting or radio formats, and then publishing into things like Drupal or WordPress.

There have been attempts to take existing CMSes like Drupal and turn them into broader, more ambitious workflow management tools. We haven’t seen very many successful ones. A lot of the kinds of media that we work with are effectively offline media, so these have been very lightweight applications.

The one thing that we have focused on is trying to “future-proof” it, to some extent, by building a lot of meta tagging and data management tools into these new products. That’s because we’re also trying to position a lot of the media partners we’re working with to be able to think about their businesses as data or content-driven businesses, as opposed to producing newspapers or manufacturing businesses. This seems to be working well in some early pilots we’ve been doing in Kenya.
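The pipeline Arenstein describes (ingesting content, drafting, several editing passes, then multi-platform publishing, with metadata attached throughout) can be pictured as a simple state machine. Here is a minimal sketch in Python; all stage names and fields are hypothetical illustrations, not any actual AMI product:

```python
# A toy model of the editorial pipeline: ingest -> draft -> edit passes
# -> publish, with free-form metadata carried along so content can later
# be repackaged for other platforms. Every name here is hypothetical.

STAGES = ["ingested", "drafted", "first_edit", "final_edit", "published"]

class Story:
    def __init__(self, slug):
        self.slug = slug
        self.stage = STAGES[0]
        self.meta = {}  # tags that keep the content reusable later

    def tag(self, **kv):
        self.meta.update(kv)  # e.g. topic, language, location

    def advance(self):
        # Move the story one step along the pipeline, stopping at the end.
        i = STAGES.index(self.stage)
        if i < len(STAGES) - 1:
            self.stage = STAGES[i + 1]
        return self.stage

story = Story("water-shortage")
story.tag(topic="infrastructure", language="sw")
while story.stage != "published":
    story.advance()
print(story.stage)  # published
```

The design point is the `meta` dictionary: tagging content at every stage is what lets a newsroom later treat its archive as a queryable data asset rather than a pile of published pages.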

What were your takeaways from the Tech Camp? Was a hybrid unconference a good model for the News Challenge?

Justin Arenstein: A big goal that we think we’ve achieved was to try and build a community of use. We put people together. We deliberately took them to an exotic location, far away from any town, where they’re effectively held hostage in a hotel. We built in as much free time as possible, with many opportunities to socialize, so that they start creating bonds. Right from the beginning, we did a “speed dating” kind of thing. There have been very few presentations — in fact, there was only one PowerPoint in five days. The rest of the time, it’s actually the participants teaching each other.

We brought in some additional technology experts or facilitators, but they were handpicked largely from previous challenges to share the experience of going through a similar process and to point people to existing resources that they might not be aware of. That seems to have worked very well.

On the sidelines of the Tech Camp, we’ve seen additional collaborations happen for which people are not asking for funding. It just makes logical sense. We’ve already seen some of the initial fruits of that: three of the applicants actually partnered and merged their applications. We’ve seen a workflow editorial CMS project partner up with an ad booking and production management system, to create a more holistic suite. They’re still building as two separate teams, but they’re now sharing standards and they’re building them as modular products that could be sold as a broader product suite.

The Knight News Challenge has stimulated the creation of many open source tools. Is any of that code being re-used?

Justin Arenstein: We’ve tried to tap into quite a few of them. Some of the more recent tools are transferable. I think there was a grand realization that people weren’t able to deliver on their promises — and where they did deliver on tools, there wasn’t documentation. The code was quite messy. The tools weren’t really robust. Often, applications were written for specific local markets or data requirements that didn’t transfer. You effectively had to rebuild them. We have been able to re-purpose DocumentCloud and some other tools.

I think we’ve learned from that process. What we’re trying to do with our News Challenge is to workshop finalists quite aggressively before they put in their final proposals.

Firstly, make sure that they’re being realistic, that they’re not unnecessarily building components, or wasting money and energy on building components for their project that are not unique, not revolutionary or innovative. They should try and almost “plug and play” with what already exists in the ecosystem, and then concentrate on building the new extensions, the real kind of innovations. We’re trying to improve on the Knight model.

Secondly, once the grantees actually get money, it comes in a tranche format so they agree to an implementation plan. They get cash, in fairly small grants by Knight standards. The maximum is $100,000. In addition, they get engineering or programming support from external developers that are on our payroll, working out of our labs. We’ve got a civic lab running out of Kenya and partners, such as Google.

Thirdly, they get business mentorship support from some leading commercial business consultants. These aren’t nonprofit types. These are people who are already advising some of the largest media companies in the world.

The idea is that, through that process, we’re hopefully going to arrive at a more realistic set of projects that have either sustainable revenue models and scaling plans, from the beginning, or built-in mechanisms for assessments, reporting back and learning, if they’re designed purely as experiments.

We’re not certain if it’s going to work. It’s an experiment. On the basis of the Tech Camp that we’ve gone through, it seems to have worked very well. We’ve seen people abandon what we thought were overly ambitious technology plans and instead match up or partner with existing technologists. They will still achieve their goals, but do so in a more streamlined, agile manner by re-purposing existing tech.

Editor’s Note: This interview is part of an ongoing series at the O’Reilly Radar on the people, tools and techniques driving data journalism.

Are “Commons 2.0” and participatory urbanism hype or hope?

“…armed with low-cost phones and an Internet connection, people are using civic-minded apps like ForageCity to tackle everything from public safety to potholes. The question is whether the technology will have the long-term effect that some foresee, or whether the “commons 2.0” and “participatory urbanism” will become empty marketing slogans.”

-Angela Woodall, writing in the Oakland Tribune about a new mobile application from Oakland’s Youth Radio that is designed to help people redistribute extra fruit and vegetables to people in need.

Forage City app

[Image Credit: Susan Mernit]

Woodall asks good questions and, as it happens, posed them to me last week in a phone interview. (I’m quoted in the article.)

Here are a couple of thoughts that didn’t make it in. Mobile applications that civic developers are creating around the world — like ForageCity — are making it increasingly possible for more people to interact more easily, at lower cost, wherever and whenever they wish. That does lead to giving more power to more people to connect to one another and solve problems, or at least discuss them.

The potential for such apps to connect and, crucially, scale is particularly significant when there is a shared standard for the open government data that fuels them, as with the transit data standard (GTFS) that is now used in 450 different cities. Around the U.S., cities are slowly working with one another to define more such standards — but it’s a complicated process that doesn’t happen overnight, or even in a few years.
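Part of why GTFS has spread so widely is its deliberate simplicity: a feed is just a zip archive of plain CSV files (stops.txt, routes.txt, trips.txt and so on). A minimal sketch of reading stop locations, using made-up sample rows in place of a real feed:

```python
import csv
import io

# A GTFS feed is a zip archive of CSV files; stops.txt lists every stop
# with a stable ID, a human-readable name and WGS84 coordinates.
# The sample rows below are invented for illustration.
stops_txt = """stop_id,stop_name,stop_lat,stop_lon
S1,Main St & 1st Ave,37.7793,-122.4193
S2,Main St & 2nd Ave,37.7801,-122.4180
"""

stops = [
    {
        "id": row["stop_id"],
        "name": row["stop_name"],
        "lat": float(row["stop_lat"]),
        "lon": float(row["stop_lon"]),
    }
    for row in csv.DictReader(io.StringIO(stops_txt))
]

print(len(stops))        # 2
print(stops[0]["name"])  # Main St & 1st Ave
```

Because every city publishes the same columns, the same twenty lines work against any of those 450 feeds, which is exactly the network effect a shared standard buys.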

The question is whether the technology will have the long-term effect that Code for America founder Jen Pahlka described to Woodall. On that count, I tend to give Pahlka — and my publisher, Tim O’Reilly — the benefit of the doubt.

As I said to the reporter, the potential for civic apps is enormous — but the tools are only as good as the people who use them and adapt them. The tools can be quite good on their own — full stop — but many network effects will only take place with broad, mainstream adoption.

Smartphones can now be used for finding shelter, improving medical care and documenting riots — but the same devices are also used for gaming, pornography, celebrity gossip and shopping. While the apps used to find city services are generally not the ones used to surveil citizens, in practice the mobile device itself may be an agent of both actions.

Working out how to both protect the rights of citizens and empower citizens using mobile devices will be a difficult and crucial need in the years ahead.

It’s not immediately clear, at least to this observer, that state governments, Congress, regulators and law enforcement are up to the challenge, but it’s hard not to hope that they rise to it.

Gov 2.0 goes mainstream with a new Associated Press article on open government data

We live in interesting times. Last week, NPR listeners learned about “local Gov 2.0.” This weekend, civic applications and open data emerged further into the national consciousness with a widely syndicated new Associated Press story by Marcus Wohlsen, who reported that a “flood of government data fuels rise of city apps.” Here’s how Wohlsen describes what’s happening:

Across the country, geeks are using mountains of data that city officials are dumping on the Web to create everything from smartphone tree identifiers and street sweeper alarms to neighborhood crime notifiers and apps that sound the alarm when customers enter a restaurant that got low marks on a recent inspection. The emergence of city apps comes as a result of the rise of the open data movement in U.S. cities, or what advocates like to call Government 2.0.

The AP covered Gov 2.0 and the open government data movement in February, when they looked at how cities were crowdsourcing ideas from citizens, or “citizensourcing.”

It’s great to see what’s happening around the country get more mainstream attention. More awareness of what’s possible and available could lead to more use of the applications, and thereby more interest in and demand for civic data. For instance, on the @AP’s Twitter feed, an editor asked followers: “Hundreds of new apps use public data from cities to improve services. Have you tried any?”

Wohlsen captures the paradigm behind Gov 2.0 well at the end of his article:

“New York, San Francisco and other cities are now working together to develop data standards that will make it possible for apps to interact with data from any city. The idea, advocates of open data say, is to transform government from a centralized provider of services into a platform on which citizens can build their own tools to make government work better.”

Open311 and GTFS are data standards of this sort. What lies ahead for Gov 2.0 in 2012 has the potential to improve civic life in any number of interesting ways. I look forward to sharing that journey.
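Open311’s GeoReport v2 specification works the same way in every participating city: a GET to /services.json lists the service types the city accepts, and a POST to /requests.json files a new service request. A minimal sketch of assembling such a request; the endpoint URL, service code and API key below are placeholders, not a real deployment:

```python
from urllib.parse import urlencode

# Open311 GeoReport v2 is a small REST API: GET /services.json enumerates
# the service types a city accepts; POST /requests.json files a request.
# The base URL, service code and API key here are placeholders.
BASE = "https://open311.example.gov/v2"

def service_request_params(service_code, lat, lon, description, api_key):
    """Assemble the form fields for a POST to /requests.json."""
    return {
        "api_key": api_key,
        "service_code": service_code,  # e.g. the city's pothole category
        "lat": f"{lat:.6f}",
        "long": f"{lon:.6f}",          # the spec spells longitude "long"
        "description": description,
    }

params = service_request_params("001", 38.8977, -77.0365,
                                "Pothole in the northbound lane", "MY_KEY")
print(BASE + "/requests.json?" + urlencode(params))
```

As with GTFS, the payoff is portability: an app that can report a pothole in one Open311 city can, in principle, report one in any other by swapping the base URL.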

Open Government MAGIC: Media Access to Government Information Conference

The right of the governed to gain access to information about their government is a core pillar of the compact between “We the People” in the United States and those they elect to office. The quality, breadth and depth of that access, however, is often troubled.

Today in Washington, the Media Access to Government Information Conference (MAGIC) will explore these issues from within the august halls of the National Archives. MAGIC is a collaborative, one-day conference sponsored by the National Archives and Records Administration (NARA) and Duke University’s DeWitt Wallace Center for Media and Democracy. The primary focus of the conference is to highlight how journalists and others writing about public affairs can gain better access to government records. A liveblog of the proceedings, agenda and associated papers are embedded below:

Program and Papers

9:00-9:20 Welcome by David S. Ferriero, Archivist of the United States, NARA; Sanford Ungar, President, Goucher College, and Member, Public Interest Declassification Board

9:20-10:30 Session 1: Media Access to Federal Government Records

Journalists and NGO participants on this panel will address how FOIA and access to federal records might be re-tooled as the federal government implements its open government and transparency policies. Government panelists will describe their vision for how new policies and technologies are changing access to government records. Additional topics may include:

  • Institutionalizing the release of common records used to monitor agency activity rather than waiting for FOIA requests to come in;
  • Centralizing, updating, and documenting information systems on agency FOIA websites; and
  • Building openness into administrative (records collecting) systems that are eventually released to the public.

Moderator: Irene Wu, Director of Research, International Bureau, FCC

  • Gary Bass, Founder and Executive Director, OMB Watch;
    (Paper)
  • Sarah Cohen, Knight Professor of the Practice of Journalism and Public Policy, Duke University;
    (Comments)
  • William Kammer, Chief, FOIA Division, U.S. Department of Defense, and Vice President, American Society of Access Professionals;
  • Miriam Nisbet, Director, Office of Government Information Services (OGIS), NARA
    (Paper)

10:30-10:45 Morning Break

10:45-Noon Session 2: Technical Hurdles, Research Solutions

Journalists on the panel will identify specific technical problems in dealing with government records at federal, state, local, and tribal levels. Government officials will identify specific technical solutions or research agendas to find solutions to these problems. Additional topics may include:

  • Re-tooling internal government information systems to improve the quality of records release;
  • Government agency support of research to improve the mining and analyzing of documents not born digital, handwritten responses on forms, and audio/video of government proceedings; and
  • Insights into emerging technologies and cyber infrastructure that may facilitate media access to government records.

Moderator: Robert Chadduck, Acting Director, National Archives Center for Advanced Systems and Technologies (NCAST), NARA

  • David Donald, Data Editor, Center for Public Integrity
    (Comments)
  • Richard Marciano, Professor and Director, Sustainable Archives and Leveraging Technologies group, UNC School of Information and Library Science
  • George Strawn, Director, National Coordination Office, Networking and Information Technology Research and Development (NITRD) Program
  • Ken Thibodeau, Former Director (Retired), National Archives Center for Advanced Systems and Technologies (NCAST)
  • Derek Willis, Web developer, New York Times
    (Comments)

Noon-1:30 Luncheon

1:30-2:45 Session 3: Access to State, Local, and Tribal Government Records

Journalists on this panel will identify issues that arise frequently in seeking records at state, local, and tribal levels. Government panelists will discuss possible solutions to making these records more easily available, and how different levels of government may leverage IT to improve access to records. Additional topics may include:

  • Types of records sought at state, local, and tribal level;
  • Special challenges in variations in open access policies across states and localities; and
  • Federal funds expenditure rules that might trigger more transparency at state and local level.

Moderator: David McMillen, NARA External Affairs Liaison

2:45-3:15 Afternoon Break

3:15-4:30 Session 4: Private Sector Actions

NGO participants will discuss how they work to improve access to records, including participation in discussions to retool government records systems for better access by journalists. Additional topics may include:

  • What transparency advocates, journalism organizations, foundations, and academics could do to support access policies; and
  • The development of tools to aid in the analysis of government records.

Moderator: James Hamilton, Director, DeWitt Wallace Center for Media and Democracy, Duke University

  • Bill Allison, Editorial Director, Sunlight Foundation
  • Rick Blum, Coordinator, The Sunshine in Government Initiative
    (Paper)
  • Danielle Brian, Executive Director, Project on Government Oversight;
    Bryan Rahija, Blog Editor, Project on Government Oversight
    (Paper)
  • Charles Lewis, Executive Editor, Investigative Reporting Workshop and Professor, School of Communication, American University

Debating the meaning of WikiLeaks, the Internet and Democracy

On January 21st, a panel at the Churchill Club in San Jose, California debated the meaning of WikiLeaks and its broader impact on the Internet and democracy.

The issue of how Wikileaks relates to open government, Internet freedom, free speech and the use of technology in government is going to continue to be hotly debated in 2011. This conversation lays out some of the core issues, from the perspective of media analysts, the academy, technologists and, crucially, a man who was at the heart of one of the most important releases of classified government information in the history of the United States.

It’s worth a watch.

The panel was moderated by Paul Jay, CEO and Senior Editor, The Real News Network, and included the following speakers:

  • Daniel Ellsberg, Former State and Defense Dept. Official prosecuted for releasing the Pentagon Papers
  • Clay Shirky, Independent Internet Professional; Adjunct Professor, Interactive Telecommunications Program, New York University
  • Neville Roy Singham, Founder and Chairman, ThoughtWorks
  • Peter Thiel, Technology entrepreneur, investor, and philanthropist
  • Jonathan Zittrain, Professor of Law and Professor of Computer Science, Harvard University; Co-founder, Berkman Center for Internet & Society

Social media, local government and elections: reflections on COGEL and @DCBOEE

This week, I was proud to be one of two speakers for a session on social media and government at the Council on Governmental Ethics Laws (COGEL) conference in Washington, D.C. I delivered an adapted version of the talk on social media and government I gave at the Social Security Administration’s Open Government Awareness Day earlier this year, focusing on the elements that would be of greatest interest to a group of lawyers, regulators and academics. The presentation is embedded below:

The speaker that followed me, however, was able to share a fascinating view of what social media looks like from inside of government, specifically in the District of Columbia: Alysoun McLaughlin, the public affairs manager for the District of Columbia Board of Elections and Ethics. Here’s her bio, from the COGEL session description:

She joined the District last year, just in time to implement a long list of reforms for the 2010 election including new voting equipment, early voting and same-day registration. Prior to becoming an election official, she was a project manager for Election Initiatives at the Pew Center on the States. She previously spent a decade as a Washington lobbyist, focusing on election issues for the National Conference of State Legislatures and the National Association of Counties. She is here today to share her experience with social media during the 2010 election.

And share she did. Over the course of half an hour, she talked about Facebook, Twitter, local media, citizen engagement and much more. I captured most of her presentation on my iPhone (sorry about the unsteady hand) and have embedded her presentation, “To Tweet or not to Tweet: Engaging the Public through Social Media,” below.

If you want an excellent, practical perspective of the local government side of social media, these are worth watching. A couple of key takeaways from her presentation:

  • How can governments get insights from Twitter without using it? “Just type in the name of your agency and see what they’re saying.”
  • On D.C. elections: “We know there are going to be lines. Come to the website to see what they are.”
  • Don’t trust this to an intern. You “need someone skilled in crisis communications.”
  • “The days that I’m heavy on Twitter are the days my phone rings less.”
  • Viral tweets can raise awareness: “…and we just confirmed that a voter used a write-in stamp on a touch screen.”

Part 1: Introductions

Part 2: Reflections on Twitter and Facebook

Part 3: Twitter and the 2010 DC Election

Part 4: Who follows @DCBOEE

Part 5: Listening and using social media in government

Is Wikileaks open government?

Aeschylus wrote nearly 2,500 years ago that in war, “truth is the first casualty.” His words are no doubt known to another wise man, whose strategic “maneuvers within a changing information environment” would not be an utterly foreign concept to the Greeks in the Peloponnesian War. Aeschylus and Thucydides would no doubt wonder at the capacity of the Information Age to spread truth and disinformation alike. In November 2010, it’s clear that legitimate concerns about national security must be balanced with the spirit of open government expressed by the Obama administration.

The issues Wikileaks raises for open government policies are substantial. As Samantha Power made clear in her interview on open government and transparency: “There are two factors that are always brought to bear in discussions in open government, as President Obama has made clear from the day he issued his memorandum. One is privacy, one is security.”

As the State Department made clear in its open letter to Wikileaks, the position of the United States government is that the planned release of thousands of diplomatic cables by that organization today will place military operations, diplomatic relationships and the lives of many individuals at risk.

As this post went live, the Wikileaks website was undergoing a massive distributed denial of service (DDoS) attack, though the organization’s Twitter account was far from silenced. A tweet earlier on Sunday morning noted that “El Pais, Le Monde, Speigel, Guardian & NYT will publish many US embassy cables tonight, even if WikiLeaks goes down.”

In fact, Wikileaks’ newest leak, through the early release of Der Spiegel, had long since leaked onto Twitter by midday. Adrian Chen’s assessment at Gawker? “At least from the German point of view there are no earth-shattering revelations, just a lot of candid talk about world leaders.”

The New York Times offered a similar assessment in its own report on Wikileaks, Cables Shine Light Into Secret Diplomatic Channels: “an unprecedented look at backroom bargaining by embassies around the world, brutally candid views of foreign leaders and frank assessments of nuclear and terrorist threats.”

The Lede liveblogged reaction to Wikileaks at NYTimes.com, including the statement to Fareed Zakaria by the chairman of the Joint Chiefs of Staff, Admiral Mullen, that “the leak would put the lives of some people at risk.”

The Lede added some context for that statement:

Despite that dire warning, Robert Gates, the defense secretary, told Congress in October that a Pentagon review “to date has not revealed any sensitive intelligence sources and methods compromised by the disclosure,” of the war logs by WikiLeaks.

The Guardian put today’s release into context, reporting that the embassy cable leaks sparked a global diplomatic crisis. Among other disclosures, the Guardian reported that the cables showed “Arab leaders are privately urging an air strike on Iran and that US officials have been instructed to spy on the UN’s leadership … a major shift in relations between China and North Korea, Pakistan’s growing instability and details of clandestine US efforts to combat al-Qaida in Yemen.” The Guardian’s new interactive of diplomatic cables is one of the best places online to browse the documents.

Is the “radical transparency” that Wikileaks both advocates for – and effectively forces – by posting classified government information “open government?” The war logs from Afghanistan are likely the biggest military intelligence leak ever. At this point in 2010, it’s clear that Wikileaks represents a watershed in the difficult challenge to information control that the Internet represents for every government.

On the one hand, the Open Government Directive issued by the Obama administration on December 8, 2009 explicitly rejects releasing information that would threaten national security. Open government expert Steven Aftergood was crystal clear in June on that count: Wikileaks fails the due diligence review.

On the other hand, Wikileaks is making the diplomatic and military record of the U.S. government more open to its citizens and the world, albeit using a methodology on its own site that does not appear to allow for the redaction of information that could be damaging to the national security interests of the United States or its allies. “For me Wikileaks is open govt,” tweeted Dominic Campbell. “True [open government] is not determined and controlled by govts, but redistributes power to the people to decide.”

The New York Times editorial board explored some of these tensions in a note to readers on its decision to publish Wikileaks.

The Times believes that the documents serve an important public interest, illuminating the goals, successes, compromises and frustrations of American diplomacy in a way that other accounts cannot match… The Times has taken care to exclude, in its articles and in supplementary material, in print and online, information that would endanger confidential informants or compromise national security. The Times’s redactions were shared with other news organizations and communicated to WikiLeaks, in the hope that they would similarly edit the documents they planned to post online.

…the more important reason to publish these articles is that the cables tell the unvarnished story of how the government makes its biggest decisions, the decisions that cost the country most heavily in lives and money. They shed light on the motivations — and, in some cases, duplicity — of allies on the receiving end of American courtship and foreign aid. They illuminate the diplomacy surrounding two current wars and several countries, like Pakistan and Yemen, where American military involvement is growing. As daunting as it is to publish such material over official objections, it would be presumptuous to conclude that Americans have no right to know what is being done in their name.

It seems that the Times and Guardian decided to make redactions from the diplomatic cables before publication. It’s not clear how that will compare to what will be posted on Wikileaks.org alongside the War Logs and Afghan Diaries.

Open government, radical transparency and the Internet

More transparency from the military, Congress and the White House regarding the progress of wars is important, desirable and perhaps inevitable. Accountability to civilian leadership and the electorate is a bedrock principle in a representative democracy, not least because of the vast amounts of spending that have been outlaid since 9/11 on the shadow government that Dana Priest reported out in Top Secret America in the Washington Post.

Wikileaks and the Internet together add the concept of asymmetric journalism to the modern media lexicon. File asymmetric journalism next to the more traditional accountability journalism that Priest practices, or the database journalism that the new media crew at the Sunlight Foundation and similar organizations are pioneering online.

As Tim O’Reilly tweeted, “wikileaks *challenges* [open government / government 2.0] philosophy. Challenges are good if we rise to them.” No question about the former point. Governments that invest in the capacity to maneuver in the new media environment might well fare better in the information warfare that the 21st century battlefield now includes.

Open government is a mindset, but it goes beyond new media literacy or harnessing new technologies. The fundamental elements of open government, at least as proposed by the architects of that policy in Washington now, do not include releasing diplomatic cables regarding espionage or private assessments of world leaders. Those priorities or guidelines will not always be followed by the governed, as Wikileaks amply demonstrates.

Increasingly, citizens are turning to the Internet for data, policy and services. Alongside the efforts of government webmasters at .gov websites, citizens will find a rich stew of social media, media conglomerates and mashups that use government and private data. That mix includes sites like Wikileaks, its chosen media partners, the recently launched WLCentral.org and new models for accountability like IPaidABribe.com.

That reality reinforces the fact that information literacy is a paramount concern for citizens in the digital age. As danah boyd has eloquently pointed out, transparency is not enough. The new media environment makes such literacy more essential than ever, particularly in the context of the “first stateless news organization” Jay Rosen has described. There’s a new kind of alliance behind the War Logs, as David Carr wrote in the New York Times.

There’s also a critical reality: in a time of war, some information can and will have to remain classified for years if those fighting it are to have any realistic chance of winning. Asymmetries of information between combatants are, after all, essential to winning maneuvers on the battlefields of the 21st century. Governments appear to be playing catchup given the changed media environment, supercharged by the power of the Internet, broadband and smartphones. This year, we’ve seen a tipping point in the relationship of government, media and technology.

Comparing the Wikileaks War Logs to the Pentagon Papers is inevitable — and not exactly valid, as ProPublica reported. It would be difficult for the military to win battles, much less wars, without control over situational awareness, operational information or effective counterintelligence.

Given the importance of the ENIGMA machine or intercepts of Japanese intelligence in WWII, or the damage caused by subsequent counterintelligence leaks from the FBI and elsewhere, working to limit intelligence leaks that damage ongoing operations will continue to be vitally important to the military for as long as we have one. Rethinking the definitions for secrecy by default will also require hard work. As the disclosures from the most recent release continue to reverberate around the globe, the only certainty is that thousands of State Department and Defense Department workers are going to have an extra headache this winter.