[TOS] Copyright assignment considered harmful?

Luis Ibanez luis.ibanez at kitware.com
Wed Aug 24 22:02:30 UTC 2011

Hi Sebastien,


The full description is here:

but let me summarize below:

The Insight Journal is supported by a build infrastructure
that is running in a Xen virtual machine.

When you submit a paper to the Journal, it must include
source code, input data and parameters, as well
as the expected output data. In the field of medical image
analysis, most problems are of the form:

1) I have these input images
2) I have this code that implements an algorithm
3) The algorithm requires some numeric parameters
4) After running the algorithm on the input images we
    get some output image (e.g. the segmentation of
    a brain tumor from a CT scan).

When submitting your paper, you typically pack
the set of required materials into a .zip or .tar file:

a) PDF article
b) source code
c) input data
d) expected output data
e) input parameters
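The packing step itself is just an archive; here is a minimal sketch, where every file and directory name is an illustrative placeholder rather than a required layout:

```shell
# Sketch of packing the five submission materials into one archive.
# All names here are hypothetical examples, not a prescribed layout.
pack_submission() {
    outfile=$1
    shift
    tar -czf "$outfile" "$@"
}

# Example (hypothetical names):
# pack_submission submission.tar.gz article.pdf src/ input/ expected/ parameters.txt
```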

Full details here:


As soon as you submit the paper,
the build infrastructure does:

* Expands the .zip or .tgz file
* Configures it with CMake
* Builds the code (typically C++ in our case)
* Runs the tests (that you pre-configured with CTest)
* Compares the output of the tests to the output that you provided
* Gathers the results and submits them as a first review

So, your first review in the Journal is the outcome of the
build process of your code.
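The steps above can be sketched as a single shell function. This is a simplified illustration, not the Journal's actual script: it assumes the archive ships a `run_tests.sh` (standing in for the real CMake/CTest configure-build-test steps) plus an `expected/` directory, and that the tests write their results into `output/`:

```shell
# Hypothetical sketch of the automatic-review pipeline. The real system
# drives CMake and CTest inside a Xen VM; here the configure/build/run
# steps are collapsed into one script the submission is assumed to ship.
review_submission() {
    archive=$1
    workdir=$(mktemp -d)

    # Step 1: expand the submitted archive
    tar -xzf "$archive" -C "$workdir" || { echo "failed to expand"; return 1; }

    # Steps 2-4: configure, build, and run the tests
    ( cd "$workdir" && sh ./run_tests.sh ) || { echo "tests failed to run"; return 1; }

    # Step 5: compare every produced output against the expected output
    status=0
    for expected in "$workdir"/expected/*; do
        name=$(basename "$expected")
        if diff -q "$expected" "$workdir/output/$name" >/dev/null 2>&1; then
            echo "PASS $name"
        else
            echo "FAIL $name"
            status=1
        fi
    done
    return $status
}
```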

5 stars = builds and runs without errors, and replicates output
4 stars = runs without errors but output is different
3 stars = runs with errors
2 stars = doesn't build (compile/link)
1 star = failed to configure
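The scale above boils down to "how far did the submission get?", which can be expressed as a small function. The ok/fail flags are an assumed interface; the real system derives these outcomes from the CMake/CTest logs:

```shell
# Map the outcome of each stage to the 1-5 star scale above.
# Each argument is "ok" or "fail"; this interface is hypothetical.
star_rating() {
    configured=$1; built=$2; ran=$3; replicated=$4
    if   [ "$configured" != ok ]; then echo 1   # failed to configure
    elif [ "$built" != ok ];      then echo 2   # doesn't build
    elif [ "$ran" != ok ];        then echo 3   # runs with errors
    elif [ "$replicated" != ok ]; then echo 4   # output is different
    else                               echo 5   # replicates output
    fi
}
```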

For example,

for this paper:

The builds are here:

You can see that two tests failed due to timeouts
(they took more than 1,500 seconds...)


All the same materials are made available to readers,
so, you can download the PDF file, the source code,
input data, parameters and expected output data here:


(the "Download Package" link on the right side of the page).


You can then attempt to replicate the work described
in the paper.  Expand the .zip file, configure with cmake,
build, run the tests,... and at that point, you can go back
to the online journal and post a review based on your
experience reproducing the work.

This one, for example, shows the combination of
automatic reviews and human reviews:


Other Journals in the domain of medical imaging
simply ask reviewers to volunteer to do reviews:
they send them the PDF of the article, and the reviewer
is supposed to judge the paper based entirely on
reading the text, without any attempt to replicate
the work.  This type of review has sent the "scientific"
community back three thousand years to the Aristotelian
times, when philosophers judged the truth of statements
based only on their aesthetic appeal, and without regard
for the actual reality of the statements.

It wasn't until Galileo that we started running experiments
instead of basing our arguments only on thought processes.


We came to build the Insight Journal because, in the process of
writing the source code of the Insight Toolkit (ITK), we realized
that most of the material that gets published in Journals is
irreproducible (and therefore useless).

Academics have been driven to publish just to get brownie
points for their salaries, raises, and tenure evaluations, and
not for the purpose of sharing useful information with their peers.

If you are interested in reproducibility, you may want to look at:




I'll be happy to give you more details on how the reproducibility
verification of the Insight Journal was implemented.

Currently we are extending it to support Git as a back end for
code storage, and to accept Virtual Machines as platforms
where the code and data have already been configured.



On Tue, Aug 23, 2011 at 7:54 PM, Sebastian Benthall <sbenthall at gmail.com> wrote:
> Luis,
> Could you explain more about how your open-access journal is reproducible,
> or point me to something describing how?
> Thanks,
> Sebastian
> On Tue, Aug 23, 2011 at 4:37 PM, Luis Ibanez <luis.ibanez at kitware.com>
> wrote:
>> Mel,
>> Academic publishing is split today into
>> two parallel universes:
>> A) The Open Access community
>>                   and
>> B) The Traditional Publishers using business models
>>     that pre-date the industrial revolution.
>> Some publishers are slowly moving from (B) to (A).
>> Some others still doing (B) are hoping that (A)
>> will go away and that they will survive with their
>> traditional business models.
>> Sadly enough, most of the scientific and technical
>> societies do (B).  For example IEEE, ACM  and ACS.
>> IEEE even went to the extreme of lobbying *against*
>> the NIH Public Access policy.
>>             http://publicaccess.nih.gov/
>> (That requires all NIH publicly funded research
>> to be published in Open Access so it is available
>> to the taxpayers who... paid for it).
>> IEEE wants to protect its stream of $50M+/year revenue
>> that results from Journal subscriptions and conferences.
>> As Clay Shirky said: Institutions lose track of their mission
>> and quickly turn to focus on self-preservation...
>> Fresh minded publishers, such as PLoS and
>> BiomedCentral have embraced the Open Access
>> model, and have promoted policies in support of
>> Open Access publishing for Federally Funded
>> research (there is an ongoing bill that will extend
>> the NIH public access policy to 11 other Federal
>> agencies).
>> Your options today,
>> then come down to:
>> 1) Find an Open Access Journal in the
>>     area of your interest, and publish with them.
>> or
>> 2) Attempt to not transfer your copyright when
>>    publishing with a traditional Journal that
>>    does (B), and instead just give them the
>>    license that they need to publish your work.
>>    Typically a Creative Commons by Attribution
>>    License should do the trick. In this option,
>>    be ready for a fight...
>> or
>> 3) Start your own Open Access Journal.
>> In the domain of Medical Image Analysis,
>> we took option (3), about six years ago:
>>      http://www.insight-journal.org/
>> We made it open, we made it free,
>> we made it reproducible.
>> ---
>> The typical argument that you will hear is
>> that Traditional Journals in (B) have the "best
>> reputation", and highest "impact factors",
>> and that therefore you should bend to their
>> primitive intellectual property practices.
>> The reality on the ground is that "impact
>> factor" is a bogus measure, computed by
>> a company using a "proprietary method",
>> that nobody has ever managed to reproduce;
>> and that "Reputation" is something that we
>> (as a community) do for the Journals, when
>> we send our best papers to them, review
>> (for free) for them, serve as associate
>> editors (for free) for them, serve as editors
>> (for free) for them.   It is quite a nice business
>> model, when you think about it. They get their
>> content for free, the quality verification for free,
>> and sell content at high prices.
>> For example,
>> some Elsevier Journals have higher profit margins
>> than Microsoft and Google:
>> http://www.righttoresearch.org/blog/6-reasons-open-access-matters-to-the-medical-commu.shtml
>> --
>> PLoS gained a reputation of excellence
>> in just about six years, beating Science
>> and Nature, which have been around for
>> more than a century.
>> So, reputation can be built, as long as
>> a community commits to its principles.
>> (...you know that better than most of us..)
>> You probably will also be exposed to the
>> fallacy of "Publish or Perish", which sadly
>> is the mother of all the current mediocrity
>> in the larger field of scientific research.
>> It doesn't take too long to figure out that
>> if academics are rewarded for the number
>> of published papers, then they will publish
>> as many papers as they can, with as little
>> content as they can. Alas, that's what
>> we get today.
>> ---
>> Stick to your guns and your Open Source
>> instincts. Academic publishing is broken,
>> and Open Access is part of the remedy.
>>      Luis
>> ------------------------
>> On Sat, Aug 20, 2011 at 2:12 AM, Mel Chua <mel at redhat.com> wrote:
>> > (The subject line is an allusion to
>> >
>> > http://www.u.arizona.edu/~rubinson/copyright_violations/Go_To_Considered_Harmful.html.)
>> >
>> > As some of you know, I started grad school this week. And... culture
>> > shock.
>> > Ohhhh boy, culture shock. (Yes, I know every professor who's had me for
>> > POSSE is now chortling with we-told-you-so glee.) One incident came
>> > today,
>> > when at the urging of Karl Fogel, who runs http://questioncopyright.org,
>> > I
>> > looked into academic copyright -- specifically, what's the deal for the
>> > places TOS typically submits to (FIE and SIGCSE)?
>> >
>> > A few hours and a quietly dawning horror later, I... think I've screwed
>> > up.
>> > My first couple co-submissions of work on teaching open source are,
>> > ironically, *unable* to be open-licensed. I've documented my naive
>> > findings
>> > here:
>> >
>> > http://blog.melchua.com/2011/08/20/in-which-mel-is-saddened-and-bewildered-by-academic-copyright-assignments/
>> >
>> > Please tell me that I'm missing something. How can we get
>> > academically-published TOS output released under open licenses? Why do
>> > we
>> > put up with this? Yes, I understand the publishing industry needs to
>> > make
>> > money and this "way of doing things" was well-intentioned at the time
>> > they
>> > were designed, but... but... why?
>> >
>> > --Mel
>> >
>> > PS: This isn't the only thing I've written about academic culture shock,
>> > btw
>> > -- for instance,
>> >
>> > http://blog.melchua.com/2011/08/17/academic-culture-shock-grad-student-ta-training/.
>> > _______________________________________________
>> > tos mailing list
>> > tos at teachingopensource.org
>> > http://lists.teachingopensource.org/mailman/listinfo/tos
>> >
