The goal of EPrints is to facilitate high quality, high value repositories that support high volumes of daily deposits, for open access, scholarly collections, teaching, data or preservation.
EPrints v3, released in January 2007, provided important steps in this direction, improving metadata quality by providing AJAX assistance to depositors, adding new value to repository contents by giving readers the ability to export material to a wide range of applications and services, and allowing depositors to import eprints from high-quality services such as Crossref and PubMed.
EPrints consists of a core API plus a suite of plugins that provide all of the user interface and repository functionality. Plugins extend and alter any EPrints facilities, providing a huge increase in scope for open source community development. The most recent plugins to be released provide new features for repository-scale metadata quality management, configuration control and user statistics.
This tutorial is aimed at those who are new to the area of repositories and who want to learn more about key advocacy and policy issues. The tutorial will include information and advice on putting together an institutional advocacy campaign and developing policies for your repository. There will be opportunities for participants to share experiences and to ask questions. The tutorial will include a practical exercise in developing an advocacy presentation. Participants with experience of advocacy are welcome to attend the session to share their experiences, but should bear in mind that it is aimed primarily at those looking for help and advice in advocacy matters.
This tutorial is aimed at those starting to use, or planning to use, the OAI Protocol for Metadata Harvesting (OAI-PMH). I will review the fundamental concepts and mechanics of the protocol from the perspectives of both data providers and service providers. I will then highlight selected best practices for data providers (and pitfalls to avoid) and implications for service providers.
The best practices discussion will include OAI identifiers, date-stamps, deleted records, sets, resumption tokens, about containers, branding, error conditions, HTTP server issues, and a few pointers regarding what makes for good, shareable metadata.
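To illustrate the mechanics behind some of these best practices, the following sketch shows how a harvester might page through a ListRecords response, following resumption tokens until the list is exhausted. This is a minimal illustration, not part of the tutorial material: the endpoint URL is hypothetical, error conditions are not handled, and a real harvester would also need to deal with deleted records, sets, and retry-after behaviour.

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

OAI_NS = "{http://www.openarchives.org/OAI/2.0/}"

def build_request(base_url, metadata_prefix=None, token=None):
    """Build a ListRecords request URL. Per the protocol, a request
    carrying a resumptionToken must not repeat the other arguments."""
    if token:
        args = {"verb": "ListRecords", "resumptionToken": token}
    else:
        args = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    return base_url + "?" + urllib.parse.urlencode(args)

def parse_page(xml_text):
    """Return (records, resumption_token) for one response page.
    An absent or empty resumptionToken means the list is complete."""
    root = ET.fromstring(xml_text)
    records = root.findall(f".//{OAI_NS}record")
    tok_el = root.find(f".//{OAI_NS}resumptionToken")
    token = tok_el.text if tok_el is not None and tok_el.text else None
    return records, token

def harvest(base_url, metadata_prefix="oai_dc", fetch=None):
    """Yield all records, following resumption tokens until exhausted.
    `fetch` is injectable for testing; by default it does an HTTP GET."""
    fetch = fetch or (lambda url: urllib.request.urlopen(url).read())
    token = None
    while True:
        url = build_request(base_url, metadata_prefix, token)
        records, token = parse_page(fetch(url))
        yield from records
        if token is None:
            break
```

Note the design point the tutorial's best-practices list touches on: the resumption token is an opaque value minted by the data provider, so the harvester simply echoes it back and must not try to interpret or construct it.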
Tutorial 4 - Object models and object representation (Main auditorium)
This tutorial will provide a practical overview of current practices in modelling complex or compound digital objects. It will examine some of the key scenarios around creating complex objects and will explore a number of approaches to packaging and transport. Taking research papers, or scholarly works, as an example, the tutorial will explore the different ways in which these, and their descriptive metadata, can be treated as complex objects. Relevant application profiles and metadata formats will be introduced and compared, such as Dublin Core, in particular the DCMI Abstract Model, and MODS, alongside content packaging standards such as METS, MPEG-21 DIDL and IMS CP.
Finally, we will consider some future issues and activities that are seeking to address these. The tutorial will be of interest to librarians and technical staff with an interest in metadata or complex objects, their creation, management and re-use.
This tutorial will review current access management technologies and invite participants to discuss use cases and requirements for access management, particularly with respect to scholarly archives and their users. The presenters will describe the concepts and architecture of Federated Access Management (FAM) with reference to some large-scale federation implementations, and discuss the challenges faced particularly in Identity Management by academic institutions. The tutorial will include a practical demonstration of how FAM can be applied to an Open Archive repository.
Tutorial 6 - Managing a repository: which legal aspects are really important? (Council chamber)
The tutorial 'Managing a repository: which legal aspects are really important?' gives an overview of the legal aspects of managing a repository. Issues such as an institutional copyright policy, publishing agreements and deposit agreements will be reviewed, with the most important issues emphasized.
The tutorial will be interactive. Participants are requested to hand in questions or cases they experience in their daily repository life beforehand. These questions will be dealt with during the tutorial. The tutorial includes several cases the participants will work on in smaller groups.
The OAI-PMH was released in 2001 and stabilized at v2.0 in 2002. Since then there has been steady growth in adoption of the protocol. Support for the OAI-PMH is assumed for base-level interoperability between institutional repositories, and is also provided for many other collections of scholarly material. I will review the current landscape and reflect on some milestones and issues.
(Cornell University, USA)
Video in CDS
OAI Object Re-Use and Exchange (40m)
YouTube, Flickr, del.icio.us, blogs, message boards and other "Web 2.0" related technologies are indicative of the contemporary web experience.
There is a growing interest in appropriating these tools and modalities to support the scholarly communication process. This begins with leveraging the intrinsic value of scholarly digital objects beyond the borders of the hosting repository. There are numerous examples of the need to re-use objects across repositories in scholarly communication.
These include citation, preservation, virtual collections of distributed objects, and the progression of units of scholarly communication through the registration-certification-awareness-archiving chain.
The last several years have brought about numerous open source repository systems and their associated communities. The Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) has been the initial catalyst for repository interoperability. However, there is now a rising interest in repositories no longer being static components in a scholarly communication system that merely archive digital objects deposited by scholars. Rather, they can become building blocks of a global scholarly communication federation in which each individual digital object can be the ore that fuels a variety of applications.
Both the interest in this type of federation, and the insights gained thus far are sufficiently strong to move beyond prototypes and to support an effort to formally specify this next level of interoperability across repositories. Through the support of the Mellon Foundation, a two-year international initiative to define this interoperability fabric started in October 2006. The effort is in the context of the Open Archives Initiative, and is named Object Re-Use and Exchange (ORE). OAI-ORE is intended to be a complement to OAI-PMH.
OAI-ORE is coordinated by Carl Lagoze and Herbert Van de Sompel, and consists of international experts on Advisory, Technical and Liaison Committees. The Technical Committee held its first meeting in January 2007 and began its initial work to develop, identify, and profile extensible standards and protocols to allow repositories, agents, and services to interoperate in the context of use and reuse of compound digital objects beyond the boundaries of the holding repositories.
In this presentation, we will give an overview of the current activities, including: defining the problem of compound documents within the web architecture, enumerating and exploring several use cases, and identifying likely adopters of OAI-ORE. More information about ORE can be found at: http://www.openarchives.org/.
Dr. Herbert Van de Sompel
(Los Alamos National Laboratory, USA)
Video in CDS
Author identification: national and disciplinary approaches (30m)
Author identification can be provided by authors themselves or by intermediaries. Authors can be identified ex post, within sets of bibliographic data, or ex ante, when publication data is composed.
Leo Waaijers introduces the digital author identifier, as used in the DARE project. It is based on established library technology, yet compliant with the forthcoming ISO standard on party identification, ISPI.
Thomas Krichel introduces the ACIS project (http://acis.openlib.org) and its first implementation in the RePEc author service (http://authors.repec.org). This is a low-cost approach. It relies on authors themselves to register. A central registration is a purely ex-post operation. However, it can be combined with an archival operation to permit additional ex-ante registration of author data.
Herbert Van De Sompel
(Los Alamos National Laboratory, USA)
OpenDOAR Policy tools and applications (30m)
OpenDOAR conducted a survey of the world's repositories that showed two-thirds of them as having unusable or missing policies for content re-use and other issues. Such policies are essential for service providers to be able to develop innovative services that use the full potential of open access. OpenDOAR has developed a set of policy generator tools for repository administrators and is contacting administrators to advocate policy development. It is hoped that one outcome from this work will be some standardisation of policies in vocabulary and intent. Other developments include an OpenDOAR API. This presentation looks at the way that the tools and API have been developed and the implications for their use.
(University of Nottingham, SHERPA, UK)
Video in CDS
Measuring impact revisited - an update on infrastructure, methods and techniques (20m)
Impact is generally defined as any change or outcome resulting from an activity. In the case of scientific research, publications are the quantifiable outcome of the research process. The presentation will therefore focus on electronic publication impact as a limited but rather well-defined sub-field of research impact. Publication impact can be measured by author-generated or reader-generated indicators: author-generated indicators are citations, while reader-generated indicators are usage. Usage data can be collected through web server or link resolver logs, and it has to be normalized in order to be shared and analyzed meaningfully. There are some initiatives to provide a suitable infrastructure, including publisher data (COUNTER/SUSHI) and data collected through open access repositories. Citation as well as usage data can be analyzed quantitatively or structurally. These analyses can be combined or complemented to create new metrics to add to the ISI impact factor (IF).
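The normalization step mentioned above can be illustrated with COUNTER-style double-click filtering, in which repeated requests for the same item by the same user within a short window are counted only once. The sketch below is an illustration of the idea, not an implementation of the COUNTER code of practice: the 30-second window and the log-record layout are assumptions chosen for clarity.

```python
from datetime import datetime, timedelta

# Illustrative threshold; COUNTER specifies different windows per format.
DOUBLE_CLICK_WINDOW = timedelta(seconds=30)

def filter_double_clicks(events):
    """Collapse repeated (user, item) requests arriving within the window
    into a single counted event. `events` is an iterable of
    (timestamp, user_id, item_id) tuples, assumed sorted by timestamp."""
    last_seen = {}   # (user, item) -> timestamp of most recent request
    counted = []
    for ts, user, item in events:
        key = (user, item)
        prev = last_seen.get(key)
        if prev is None or ts - prev > DOUBLE_CLICK_WINDOW:
            counted.append((ts, user, item))
        last_seen[key] = ts   # sliding window: measure from the last request
    return counted

def usage_counts(events):
    """Per-item download counts after double-click filtering."""
    counts = {}
    for _, _, item in filter_double_clicks(events):
        counts[item] = counts.get(item, 0) + 1
    return counts
```

Only after this kind of cleaning can counts from different repositories or publishers be compared or aggregated, which is exactly the role of the shared infrastructures (COUNTER/SUSHI and repository-based initiatives) discussed in the talk.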
(Stuttgart University Library, Germany)
Video in CDS
MESUR: metrics from scholarly usage of resources (20m)
Usage data is increasingly regarded as a valuable resource in the assessment of scholarly communication items. However, the development of quantitative, usage-based indicators of scholarly impact is still in its infancy. The Digital Library Research & Prototyping Team at the Los Alamos National Laboratory's Research Library has therefore started a program to expand the set of usage-based tools for the assessment of scholarly communication items. The two-year MESUR project, funded by the Andrew W. Mellon Foundation, aims to define and validate a range of usage-based impact metrics, and issue guidelines with regards to their characteristics and proper application.
The MESUR project is constructing a large-scale semantic model of the scholarly community that seamlessly integrates a wide range of bibliographic, citation and usage data. Functioning as a reference data set, this model is analyzed to characterize the intricate networks of typed relationships that exist in the scholarly community. The resulting characterizations guide a program for the definition and validation of usage-based impact metrics. This lecture will provide an overview of MESUR's progress so far, followed by a discussion of the architecture and standards that MESUR has adopted.
The results of a preliminary analysis of the generated model will be discussed. The lecture will conclude with an overview of MESUR's future activities and expected outcomes.
(Los Alamos National Laboratory, USA)
Video in CDS
Usage statistics and demonstrator services (20m)
An understanding of the use of repositories and their contents is clearly desirable for authors and repository managers alike, as well as those who are analysing the state of scholarly communications. A number of individual initiatives have produced statistics of various kinds for individual repositories, but the real challenge is to produce statistics that can be collected and compared transparently on a global scale. This presentation details the steps to be taken to address the issues and attain this capability.
DRIVER: Building a Sustainable Infrastructure of European Scientific Repositories (30m)
The acronym DRIVER stands for “Digital Repository Infrastructure Vision for European Research”. Ten partners from eight countries have entered into an international partnership to connect and network, as a first step, more than 50 physically distributed institutional repositories into one large-scale, virtual Knowledge Base of European research. Universities and research organisations around the world currently build repositories, whose overall number is estimated to be well over 600. As the academic information landscape is already highly fragmented, DRIVER is the trans-national catalyst to overcome local, isolated efforts and to stop fragmentation by offering one harmonised, virtual knowledge resource. DRIVER is currently building a production-quality test-bed to assist the development of a knowledge infrastructure across Europe.
DRIVER as a project, funded by the “Research Infrastructure” unit of the European Commission, is also preparing for the future expansion and upgrade of the Digital Repository infrastructure across Europe. Ultimately repositories from all European countries should be included, following a set of pragmatic guidelines and committed to trans-national collaboration. DRIVER is also seeking early collaboration with international repository providers from other continents, including the United States, Australia and Asia.
(Goettingen State and University Library, Germany)
Video in CDS
The repository ecology: an approach to understanding repository and service interactions (40m)
An increasing number of university institutions and other organisations are deciding to deploy repositories, and a growing number of formal and informal distributed services are supporting or capitalising on the information these repositories provide. Despite reasonably well-understood technical architectures, early majority adopters may struggle to articulate their place within the actualities of a wider information environment. The idea of a repository ecology provides developers and administrators with a useful way of articulating and analysing their place in the information environment, and the technical and organisational interactions they have, or are developing, with other parts of such an environment. This presentation will provide an overview of the concept of a repository ecology and examine some examples from the domains of scholarly communications and e-learning.
Libraries and Publishers presentations (Main auditorium)
On the golden road: Open access publishing in particle physics (30m)
The particle physics community has over the last 15 years achieved so-called full green open access through the wide dissemination of preprints via arXiv, a central subject repository managed by Cornell University. However, green open access does not alleviate the economic difficulties of libraries, as these are still expected to offer access to versions of record of the peer-reviewed literature. For this reason the particle physics community is now addressing the issue of gold open access by converting a set of the existing core journals to open access.
A working party is now working to bring together funding agencies, laboratories and libraries into a single consortium, called SCOAP3 (Sponsoring Consortium for Open Access Publishing in Particle Physics). This consortium will engage with publishers towards building a sustainable model for open access publishing. In this model, subscription fees from multiple institutions are replaced with contracts with publishers of open access journals, where the SCOAP3 consortium is a single financial partner.
(Lund University, Sweden)
Open Access Forever -- Or Five Years, Whichever Comes First: Progress on Preserving the Digital Scholarly Record (45m)
As the migration of scholarly communication from print to digital continues to progress rapidly, and as Open Access to the research literature and related data becomes more common, the challenge of ensuring that the scholarly record remains available over time becomes more urgent. There has been good progress on these challenges in recent years, but many problems remain. The current state of the curation and preservation of digital scholarship over its entire lifecycle will be reviewed, and progress on problems of specific interest to scholarly communication will be examined. The difficulty of curating the digital scholarly record and preserving it for future generations has important implications for the movement to make that record more open and accessible to the world, so this is a timely topic for those who are interested in the future of scholarly communication.
Those setting up, or planning to set up, a digital repository may be interested to know more about what has gone before them. What is involved, what is the cost, how many people are needed, how have others made the case to their institution, and how do you get anything into it once it is built?
I have recently undertaken a study of European repository business models for the DRIVER project and will present an overview of the findings.
The impact of self-archiving on journals and publishers is an important topic for all those involved in scholarly communication. There is some evidence that the physics arXiv has had no impact on physics journals, while 'economic common sense' suggests that some impact is inevitable. I shall review recent studies of librarian attitudes towards repositories and journals, and place this in the context of IOP Publishing's experiences with arXiv. I shall offer some possible reasons for the mismatch between these perspectives and then discuss how IOP has linked with arXiv and experimented with OA publishing. As well as launching OA journals, we have co-operated with Cornell and the arXiv on Eprintweb.org, a platform that offers new features to repository users.
Group 1: This business of repositories: the case, the cost, the challenges (2h, 4-3-006)
This is for those interested in what is involved in conceiving, planning, building and marketing a digital repository for their institution or organisation. We will discuss various models and approaches in their own contexts, drawing out the lessons on what works, when and how - and what doesn't. Real-life case studies will form part of the activity. Delegates will be able to go away with some useful principles that they can apply in their own situation.
This breakout session has a technical-organisational focus and touches upon the following topics:
- aggregation: aggregation services can provide added value that goes beyond what individual repositories can offer.
- architectures: networked infrastructures can mediate between service providers and data providers.
The session involves major initiatives in the field: OAI-practice as experienced by important service providers and implications for a smooth, machine-based content aggregation will be addressed. The relation of distributed and aggregated content and, respectively, the relation of data-providers and service-providers will be furthered: added values enabled by novel approaches to data re-use such as ORE and infrastructural architectures for inter-repository network-operation such as those coming from the DRIVER-project will be discussed. Participants will be provided time and scope to raise issues, share their experiences and elaborate possible scenarios.
Group 4: Repositories in an institutional context (Technical discussion) (2h, 61-1-009)
Repositories are starting to take their place within an institution's infrastructure, sitting alongside portals, course management systems, virtual research environments and others. But in considering how a repository can best serve an institution it is important to consider how this technology will integrate with existing systems, so that it doesn't become a standalone white elephant that is a nuisance to use. This breakout will consider the technical ways in which repositories can be presented as a system that is easy to use and interact with, from the searcher's, depositor's and administrator's perspectives. It will look at potential interaction through other institutional environments and seek to identify ways in which repositories can become a more integral part of the technical landscape. Bring along your examples of how repositories are being technically embedded within institutions to help share experience and ideas.
Researchers and end users presentations (Main auditorium)
Doctoral e-Theses: experiences in harvesting on a national and European level (30m)
In many European countries doctoral e-theses are an integrated part of the institutional repositories that have been set up in recent years. In some countries they are harvested at a national level, as in the Netherlands with the 'Promise of Science' portal (http://www.darenet.nl/promiseofscience).
In October 2006 SURFfoundation (The Netherlands), JISC (UK) and DIVA (funded through BIBSAM in Sweden) started a common project to harvest repositories with e-theses on an international scale, to set up a freely accessible European portal, and to test the interoperability in practice. The project, which aims at creating a value-added service for doctoral e-Theses, will finish in June 2007. In the presentation we will show some lessons learned and the first results of the Demonstrator, an interoperable portal of European doctoral e-theses in five countries: Denmark, Germany, the Netherlands, Sweden and the UK.
Furthermore, we will present the developments regarding the European coordination of doctoral e-theses through the “GUIDE working group”, funded by JISC and SURF.
Gerard Van Westrienen
(SURF, The Netherlands)
Video in CDS
Science Commons supports Open Access through its Scholar's Copyright Project (SCP). This talk will outline the three core elements of the SCP: Creative Commons licensing for open access publishing, Open Access Law journal-author agreements for converting journals to open access, and the Scholar's Copyright Addendum Engine for retaining rights to self-archive in meaningful formats and locations for future re-use.
More than 250 science and technology journals already publish under Creative Commons licensing while 35 law journals utilize the Open Access Law agreements. The Addendum Engine is a new tool created in partnership with SPARC and U.S. universities.
Dissemination or publication? Some consequences from smudging the boundaries between research data and research papers (30m)
Project StORe’s repository middleware will enable researchers to move seamlessly between the research data environment and its outputs, passing directly from an electronic article to the data from which it was developed, or linking instantly to all the publications that have resulted from a particular research dataset. Originally conceived as a means of improving information discovery and data curation, it may also be claimed that this enhancement to the functionality of repositories has significantly broadened the meaning of the terms publish and publication. By publishing data that may not have been through a process of peer review, and by making data public before a scholarly article is approved and printed, are we introducing new risks? Is the scholarly article invalidated as a first publication of research results? The process is regulated, including a mechanism for access control and user authentication, and may not therefore be described as open access in the truest sense, but making source or synthetic research data publicly available in this way raises implications for the identity and purpose of scholarly publications.
(Utrecht University, Netherlands)
Open archives, the expectations of the scientific communities (20m)
Open archives (OA) started in physics more than 15 years ago with arXiv, and have since played a more and more important role in the activity of the discipline; indeed, in many fields of physics, arXiv has now become the major vector of scientific communication. We now have two communication channels in parallel, traditional scientific journals with peer review and open archives, both with different functionalities and both indispensable. It is therefore interesting to try to transpose to other disciplines the scheme that has worked so well for physicists, which means that the reasons for the success of arXiv should be analyzed.
Scientists do not care about the technicalities, or whether the OA is centralized or distributed with a high level of interoperability. What they wish for is a single interface where all the scientific information in their domain is available, with the same scientific classifications, etc. In the case of collaborations between different institutions, they do not wish to have to deposit their texts several times in several repositories. Moreover, they wish to use OA that are relatively homogeneous in content, and for instance do not want to use systems where free access to the full text is not guaranteed, or where articles are mixed up with reports or raw data (as is sometimes the case in OAI repositories). Finally, long-term stability is absolutely essential, if only to offer the possibility of citing the content of the OA in subsequent work. In this respect, we know that much of the material made available on personal and laboratory sites does not have the necessary longevity to be really usable scientifically.
This analysis led the French CNRS to start the HAL project, a multidisciplinary open archive strongly inspired by arXiv and directly connected to it. HAL automatically transfers data and documents to arXiv for the relevant disciplines; similarly, it is connected to PubMed and PubMed Central for the life sciences. HAL is customizable, so that institutions can build their own portal within HAL, which then plays the role of an institutional archive (examples include INRIA, INSERM, ENS Lyon, and others). It includes a form of certification called "stamps". The number of full-text documents grows steadily, now at around 1,500 per month, mostly through self-archiving from different communities (including the humanities: archaeology, history, literature, etc.). A national agreement has been signed among most French research institutions to use HAL as a common archive.
Researchers in cosmology and astrophysics depend on the arXiv repository for the registration and dissemination of their work, as well as for current awareness, yet they continue to submit papers to journals for review. Could rapid quality certification be overlaid directly onto the arXiv repository?
This presentation introduces the RIOJA (Repository Interface to Overlaid Journal Archives) project, on which a group of cosmology researchers from the UK is working with UCL Library Services and Cornell University. The project is creating a tool to support the overlay of journals onto repositories, and will demonstrate a cosmology journal overlaid on top of arXiv. RIOJA will also work with the cosmology community to explore the social and economic aspects of journal overlay in this discipline: what other value, besides the quality stamp, does journal publication typically add? What are the costs of the ideal overlay journal for this community, and how could those costs be recovered? Would researchers really be willing to submit work to a new journal overlaid on the arXiv repository?