Another Tip That Education Is Changing: OpenStax Textbooks

Costs of education need to come down. Open course materials are growing. Maybe education will indeed undergo a transformation in the next ten years. There are many things that will need to change for true education reform to take place. But better resources matter. Enter Rice University. Its OpenStax College initiative tries to address the problem of source fragmentation. In other words, resources, resources everywhere but no time to synch may become less of a problem than it has been so far. One nice touch is format flexibility: web, e-textbook, or hard copy options are available. “The first five textbooks in the series–Physics, Sociology, Biology, Concepts of Biology, and Anatomy and Physiology–have been completed, and the Physics and Sociology textbooks are up at openstaxcollege.org.” The model is curious:

Using philanthropic funding, Baraniuk and the team behind OpenStax contracted professional content developers to write the books, and each book went through the industry-standard review cycle, including peer review and classroom testing. The books are scope- and sequence-compatible with traditional textbooks, and they contain all of the ancillary materials such as PowerPoint slides, test banks, and homework solutions.

So there is professional-level seeding of content while also allowing for wiki-like contributions:

Each book has its own dashboard, called StaxDash. Along with displaying institutions that have adopted the book, StaxDash is also a real-time erratum tracker: Faculty who are using the books are encouraged to submit errors or problems they’ve found in the text. “There’s also the issue of pointing out aspects of the text that need to be updated,” notes Baraniuk, “for example, keeping the Sociology book up-to-date as the Arab Spring continues to evolve. People can post these issues, and our pledge is that we are going to fix any issues as close to ‘in real time’ as possible. These books will be up-to-date in a matter of hours or days instead of years.” When accessing a book through its URL on Connexions, students and faculty will always get the most up-to-date version of the book. Faculty can, however, use the “version control” feature on Connexions to lock in a particular version of the book for use throughout a semester.

If you thought that keeping up with authoritative versions of an ebook and citing it (trust me, it is odd to cite to a location in a Kindle book) was messy, this new model will throw you. Then again, that is a small issue.

Group contributions for the latest on an issue and the ability to choose versions are a great idea. Law texts that could update the latest cases or a change in legislation as they happen and then be refined over time would be wonderful. Of course, teachers use other ways to reach these goals. But if crowd/commons-style approaches to texts work, we may see better and less expensive versions of textbooks. How the system will manage disputes about content and education boards’ issues with approval remains to be seen. Still, the promise of this approach should make the miasmic aspects of education boards look silly and create pressure for improved ways to have quality content available for educators and, most important, for students.

Hi, Keep It Open, But Behind a Paywall

Andrew Morin and six others have argued for open access to the source code behind scientific publications so that the work can be tested and live up to the promise of the scientific method. At least, I think that is the claim. Ah, irony: the piece is in Science and behind, oh yes, a paywall! As Morin says in Scientific American:

“Far too many pieces of code critical to the reproduction, peer-review and extension of scientific results never see the light of day,” said Andrew Morin, a postdoctoral fellow in the structural biology research and computing lab at Harvard University. “As computing becomes an ever larger and more important part of research in every field of science, access to the source code used to generate scientific results is going to become more and more critical.”

If the essay were available, we might assess it better too.

Nonetheless, the idea is interesting. It reminds me of work by Victoria Stodden, who has looked at this issue for some time. From her bio:

Victoria Stodden is an assistant professor of Statistics at Columbia University and serves as a member of the National Science Foundation’s Advisory Committee on Cyberinfrastructure (ACCI), and on Columbia University’s Senate Information Technologies Committee. She is one of the creators of SparseLab, a collaborative platform for reproducible computational research, and has developed an award-winning licensing structure to facilitate open and reproducible computational research, called the Reproducible Research Standard. She is currently working on the NSF-funded project “Policy Design for Reproducibility and Data Sharing in Computational Science.”

Victoria is serving on the National Academies of Science committee on “Responsible Science: Ensuring the Integrity of the Research Process” and the American Statistical Association’s “Committee on Privacy and Confidentiality” (2013).

In other words, if you are interested in this area, you may want to contact Victoria as well as Mr. Morin.

Infrastructure: The Social Value of Shared Resources

I am excited to announce that Oxford University Press has published my book, Infrastructure: The Social Value of Shared Resources. I owe a huge debt to my Madisonian colleagues for their support along the way. I will post more about the book in the next few weeks, but here are some links and a short abstract:

The book is described here (OUP site) and here (Amazon). The Introduction and Table of Contents are available here.

Short abstract:

“Infrastructure resources are at the center of many contentious public policy debates, ranging from what to do about our crumbling roads and bridges, to whether and how to protect our natural environment, to patent law reform, to electromagnetic spectrum allocation, to providing universal health care, to energy policy, to network neutrality regulation and the future of the Internet. Each involves a battle to control infrastructure resources, set the terms and conditions under which the public gets access, and determine how the infrastructure and various infrastructure-dependent systems evolve over time. This book advances strong economic arguments for managing and sustaining infrastructure resources as commons. The book identifies resource valuation and attendant management problems that recur across many different fields and many different resource types, and it develops a functional economic approach to understanding and analyzing these problems and potential solutions.”

Some thoughts on Julie Cohen’s new book Configuring the Networked Self: Law, Code, and the Play of Everyday Practice

Cross-posted at Concurring Opinions for a symposium on Julie Cohen’s important new book, Configuring the Networked Self: Law, Code, and the Play of Everyday Practice (Yale University Press 2012).

Julie Cohen’s book is fantastic. Unfortunately, I am late to join the symposium, but it has been a pleasure playing catch-up with the previous posts. Reading over the exchanges thus far has been a treat and a learning experience. Like Ian Kerr, I found myself reflecting on my own commitments and scholarship. This is really one of the great virtues of the book. To prepare to write something for the blog symposium, I reread portions of the book a second time, or maybe a third, since I have read many of the law review articles upon which the book is based. And frankly, each time I read Julie’s scholarship I am forced to think deeply about my own methodology, commitments, theoretical orientation, and myopias. Julie’s critical analysis of legal and policy scholarship, debate, and rhetoric is unyielding as it cuts to the core commitments and often unstated assumptions that I (we) take for granted.

I share many of the same concerns as Julie about information law and policy (and I reach similar prescriptions too), and yet I approach them from a very different perspective, one that is heavily influenced by economics. Reading her book challenged me to confront my own perspective critically. Do I share the commitments and methodological infirmities of the neoliberal economists she lambasts? Upon reflection, I don’t think so. The reason is that not all of economics boils down to reductionist models that aim to tally up quantifiable costs and benefits. I agree wholeheartedly with Julie that economic models of copyright (or creativity, innovation, or privacy) that purport to accurately sum up relevant benefits and costs and fully capture the complexity of cultural practices are inevitably, fundamentally flawed, and that uncritical reliance on such models to formulate policy is distorting and biased toward seamless micromanagement and control. As she argues in her book, reliance on such models “focuses on what is known (or assumed) about benefits and costs, … [and] tends to crowd out the unknown and unpredictable, with the result that play remains a peripheral consideration, when it should be central.” Interestingly, I make nearly the same argument in my book, although my argument is grounded in economic theory and my focus is on user activities that generate public and social goods. I need to think more about the connections between her concept of play and the user activities I examine. But a key shared concept is that indeterminacy in the environment and in the structure of rights and affordances sustains user capabilities, and that this is (or might be) normatively attractive whether or not users choose to exercise those capabilities. That is, there is social (option) value in sustaining flexibility and uncertainty.

Like Julie, I have been drawn to the Capabilities Approach (CA). It provides a normatively appealing framework for thinking about what matters in information policy—that is, for articulating ends. But it seems to pay insufficient attention to the means. I have done some limited work on the CA and information policy and hope to do more in the future. Julie has provided an incredible roadmap. In chapter 9, The Structural Conditions of Human Flourishing, she goes beyond identifying which capabilities to prioritize and examines the means for enabling those capabilities. In my view, this is a major contribution. Specifically, she discusses three structural conditions for human flourishing: (1) access to knowledge, (2) operational transparency, and (3) semantic discontinuity. I don’t have much to say about the access to knowledge and operational transparency discussions, other than “yep.” The semantic discontinuity discussion left me wanting more: more explanation of the concept and more explanation of how to operationalize it. I wanted more because I think it is spot on. Paul and others have already discussed this, so I will not repeat what they’ve said. But, riffing off of Paul’s post, I wonder whether it is a mistake to conceptualize semantic discontinuity as “gaps” and to ask privacy, copyright, and other laws to widen the gaps. I wonder whether the “space” of semantic discontinuities is better conceptualized as the default or background environment rather than the exceptional “gap.” Maybe this depends on the context or legal structure, but I think the relevant semantic discontinuities where play flourishes, our everyday social and cultural experiences, are and should be the norm. (Is the public domain merely a gap in copyright law? Or is copyright law a gap in the public domain?) Baselines matter. If the gap metaphor is still appealing, perhaps it would be better to describe them as gulfs.