NOBUGS 2016

Marble Hall (Copenhagen University)

Thorvaldsensvej 40
Description

Welcome to NOBUGS 2016!

The eleventh NOBUGS Conference was held at Thorvaldsensvej 40 in Copenhagen, Denmark, from 17 to 19 October 2016. The NOBUGS (New Opportunities for Better User Group Software) conference series aims to foster collaboration and exchange between scientists and IT professionals working on software for X-ray, neutron and muon sources around the world. NOBUGS 2016 was organised by the European Spallation Source, the University of Copenhagen and the MAX IV Laboratory.

Proceedings

The proceedings are ready and can be downloaded from the materials link below. The persistent location is DOI: 10.17199/NOBUGS2016.proc.

The individual articles are also linked from the landing page of the corresponding contribution.

Programme

We enjoyed a really exciting programme, including several keynote speakers.

Conference Key Information

Abstract Booklet
Delegate Photo
Information for Participants
Proceedings
Organiser Email
    • 2:15 PM
      Social Event Tycho Brahe Planetarium, Gammel Kongevej 10, Copenhagen

    • 5:00 PM
      Welcome Reception Marble Hall, Thorvaldsensvej 40

    • 8:00 AM
      Registration Marble Hall, Thorvaldsensvej 40

    • Welcome Session Marble Hall

      • 1
        Welcome Addresses
        Speakers: Prof. Kell Mortensen (Niels Bohr Institute, University of Copenhagen), Dr Peter Peterson (Oak Ridge National Laboratory), Tobias Richter (European Spallation Source ERIC)
      • 2
        Software Development for Movie Production and Rocket Science
        In this presentation I'll share some of my personal experiences related to the tricky business of scoping, managing, developing and deploying technical software in various contexts. The main focus will be on industrial production environments as well as an open source project you have likely never heard of, but almost certainly seen on the big screen. This project, dubbed OpenVDB, is a C++ library comprising a novel hierarchical data structure and a suite of tools for the efficient storage, rendering and simulation of sparse volumetric data discretized on three-dimensional grids. Ken Museth is the manager and senior principal engineer of research and development in visual effects at DreamWorks Animation, California. He invented VDB, the enabling technology of OpenVDB, an open-source library for efficient storage and simulation of VFX that is setting a new standard in the movie industry (used in over 100 feature films). For this he was awarded a Technical Academy Award in 2015. Prior to joining DreamWorks Animation in 2009 he worked on VFX in live action at Digital Domain for three years, was a full professor in computer graphics at Linköping University for five years, and a senior research scientist at Caltech for five years. During the last period he also worked at NASA's Jet Propulsion Laboratory on trajectory design for the "Genesis" space mission. Since early 2014 Ken has also worked part-time for SpaceX on CFD simulations of the new Raptor engine powering the main stage of the Mars Colonial Transporter. Ken holds a PhD in theoretical physical chemistry from the University of Copenhagen and loves adventure motorcycle riding.
        Speaker: Dr Ken Museth (DreamWorks & SpaceX)
    • 10:30 AM
      Coffee Marble Hall

    • Contributions 1 Marble Hall

      • 3
        Addressing the Challenges of implementing the ESRF Data Policy
        by Alex de Maria, Armando Solé, and Andy Gotz on behalf of the ESRF Data Policy Implementation Team. The ESRF, the European Synchrotron, has recently adopted a Data Policy under which all data collected at the ESRF will be archived for 10 years and made freely available as Open Data after an initial embargo period of 3 years (which can be extended on request). Currently the ESRF produces 2 PB of raw data annually, which means archiving at least 70 PB of data over the next 10 years if one assumes a linear growth of data production. The Data Policy introduces a number of new challenges for the ESRF, including persistent user identities, user rights, metadata definition and standardisation, automated collection of metadata, a metadata catalogue, data containers, long-term archiving, and finding and re-using data. This paper describes how these challenges are being solved and how it is possible for a mature synchrotron to adopt and implement a modern Data Policy largely built on existing standards like the ICAT metadata catalogue (icatproject.org) and the HDF5/NeXus data format/convention. Archiving such large quantities of data is feasible largely thanks to off-the-shelf tape technology, which continues to evolve and improve. (An illustrative HDF5/NeXus sketch follows this entry.)
        Speaker: Andy Gotz (ESRF)
        Paper
        Slides
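        The HDF5/NeXus convention mentioned above can be pictured with a few lines of h5py. This is a minimal, hypothetical sketch (group and attribute names follow common NeXus conventions such as NXentry/NXdata), not the ESRF production code:
        ```python
        # Minimal sketch: write a NeXus-style HDF5 file whose metadata a catalogue
        # such as ICAT could later harvest. Names and values are illustrative only.
        import h5py
        import numpy as np

        frames = np.random.poisson(100, size=(10, 256, 256)).astype(np.uint32)  # fake detector frames

        with h5py.File("scan_0001.nxs", "w") as f:
            entry = f.create_group("entry")
            entry.attrs["NX_class"] = "NXentry"
            entry.create_dataset("title", data="example scan")
            entry.create_dataset("start_time", data="2016-10-17T09:00:00")
            nxdata = entry.create_group("data")
            nxdata.attrs["NX_class"] = "NXdata"
            nxdata.attrs["signal"] = "frames"
            dset = nxdata.create_dataset("frames", data=frames, compression="gzip")
            dset.attrs["units"] = "counts"
        ```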
      • 4
        The growth of the ICAT family
        The ICAT project provides a metadata catalogue and related components to support large-facility experimental data. It aspires to link all aspects of the research chain from proposal through to publication, and it can also be used to implement a data policy, as has been done at a number of facilities using ICAT. Over the last couple of years, the existing components of ICAT have seen improvements in functionality and performance. TopCAT, a GUI for working with multiple ICATs, has changed dramatically, preserving only the original concept, and new components have been added to provide flexible data delivery solutions and to make an ICAT installation easy to manage. These changes have been made in consultation with the ICAT community to ensure that the components are highly decoupled and that, as far as possible, backwards compatibility is maintained as more sites move their ICAT installations into production.
        Speaker: Dr Stephen M Fisher (RAL - STFC)
        Paper
        Slides
      • 5
        Building a Prototype Data Analysis as a Service : the STFC experience
        Modern instruments and detectors are capable of capturing large amounts of data in one scan, and experiments are becoming more sophisticated, with multiple techniques applied at once or dynamic structures such as chemical reactions being recorded. Data volumes have now grown so large that in many cases it is simply not practical for users to transport the data to their home institution. In other cases, the analysis chains are complex, with a combination of data analysis and simulation, requiring access to high-performance computing, large-memory machines and a complex software stack for effective and timely processing of data. These resources may not be consistently available to all users. As a consequence, many facilities are exploring how to best provide additional computing resources to enable users to access and analyse their data remotely after the experiment. STFC's Scientific Computing Department is working with the RAL-based facilities (ISIS, Diamond, CLF) to implement and deploy a 'Data Analysis as a Service' platform (DAaaS) to overcome these issues. Such a system provides facility users with easy-to-use access to compute resources, co-located with the experimental data archives, to efficiently and easily process their data within a managed, secure virtual environment. Commonly used software packages will be systematically made available via a deployment and configuration system, and the environment offered to users will be customised according to the nature of the experiment and requirements of the experimental team. The system will further support a number of interfaces, allowing both easy access for routine tasks and a more interactive environment for specialised usage. In this talk we describe our experience of configuring and deploying 'Data Analysis as a Service' for facilities at STFC's Rutherford Appleton Laboratory. We will introduce the technology stack we are using to build the system, including our experience with Cloud systems, distributed storage systems such as Ceph, software distribution and packaging using Docker and CVMFS, and our experiences of different methods for providing remote-desktop-like services. We will discuss the role of the ICAT data catalogue system to provide an “intelligent” approach to control access and customisation. We will discuss our plans to extend and develop this system into a production environment, offering a range of analytic techniques to users within one service.
        Speaker: Mr Frazer Barnsley (STFC)
        Paper
        Slides
      • 6
        Image Data Management System (IDMS)
        The Image Data Management System (IDMS) is a platform for data management, data analysis and data visualization. Storing, processing and visualizing scientific image data-sets is a challenge because the data streams from the large X-ray and neutron facilities increase at a rate exceeding the rate at which generic disk drives grow. Furthermore, processing and visualizing these image data-sets is just as big a challenge as storing them, due to the limited processing power of generic desktop workstations. IDMS offers distributed storage with support for group collaborations. The offered storage scales horizontally with respect to capacity, and new storage is added transparently to the users. IDMS storage is accessed either through a web interface or through IO services such as sftp, webdavs and ftps. On top of the storage capacity, IDMS supports event-based distributed computing mechanisms. This event system enables user-defined data processing pipelines based on storage-system-level events, such as uploading directories or files to the system. Finally, IDMS supports automatic preview generation of the scientific image data as well as in-browser 2D and 3D viewing of the created previews. One example of an image storage, processing and previewing pipeline is CT reconstruction. The CT-reconstruction task is compute intensive and well suited for distributed computing. Running it through IDMS using a number of distributed compute nodes decreases the reconstruction time significantly. If a CT-reconstruction work-flow is specified for a specific folder, then previews are automatically generated for the projections uploaded to IDMS. When all projections are uploaded, the system uses distributed compute nodes to perform a 3D reconstruction of the projections. Finally, 2D slice and 3D volume previews are generated from the reconstructed tomogram. Once the processing pipeline is specified, all these processing and preview generation steps occur automatically without any user interaction. The generated previews are displayed through the web interface of the system along with basic statistics, histogram data and a contrast adjustment slider. This is shown for the projections in [figure 1]. Previews of the tomograms can be either 2D slices or 3D volume renderings, shown in [figure 2] and [figure 3]. The 3D rendering is performed in real time and the user has the possibility to interact with the volume through the browser. Using IDMS for image previews has several advantages. First of all, only the domain experts who produce the image data need to know the raw binary format, such as width, height, data type and offset. Secondly, users can eliminate the step of opening images through a third-party application such as MatLab or ImageJ, which is time-consuming when dealing with a large number of different projects and images. Thirdly, once the preview settings are in place it is easy to share image previews between collaborators within a group, as the IDMS data URL pointing to the data is all that is needed. (An illustrative preview-generation sketch follows this entry.) [figure 1]: http://www.migrid.org/vgrid/eScience/Projects/NBI/IDMC/NOBUGS/figs/skull_proj.pdf [figure 2]: http://www.migrid.org/vgrid/eScience/Projects/NBI/IDMC/NOBUGS/figs/skull_tomo_2D.pdf [figure 3]: http://www.migrid.org/vgrid/eScience/Projects/NBI/IDMC/NOBUGS/figs/skull_tomo_3D.pdf
        Speaker: Dr Martin Rehr (Niels Bohr Institute, University of Copenhagen)
        Slides
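        As a rough illustration of the preview idea described above, the following hypothetical sketch (file names and format parameters are assumptions, not the IDMS implementation) turns one raw binary projection into a contrast-stretched PNG, the kind of step a storage-event hook could trigger automatically:
        ```python
        # Sketch only: generate a PNG preview from a raw binary frame whose
        # width/height/dtype/offset are known to the data producer.
        # Requires NumPy >= 1.17 for the offset keyword of np.fromfile.
        import numpy as np
        import matplotlib
        matplotlib.use("Agg")          # headless rendering, e.g. on a compute node
        import matplotlib.pyplot as plt

        def make_preview(raw_path, png_path, width, height, dtype=np.uint16, offset=0):
            """Read one raw frame and write a contrast-stretched greyscale preview."""
            frame = np.fromfile(raw_path, dtype=dtype, count=width * height, offset=offset)
            frame = frame.reshape(height, width).astype(np.float64)
            lo, hi = np.percentile(frame, (1, 99))          # simple contrast adjustment
            plt.imsave(png_path, np.clip(frame, lo, hi), cmap="gray")

        # A pipeline could call make_preview() from a storage-event hook whenever a
        # projection is uploaded, then hand the full set to a reconstruction job.
        ```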
      • 7
        Scientific data lifecycle at Elettra-Sincrotrone Trieste
        Elettra-Sincrotrone Trieste comprises the Elettra synchrotron and the FERMI free-electron laser. Combined, the two facilities serve 34 beamlines. The Scientific Computing Team supports the full data lifecycle. Proposal submission and evaluation are handled in the Virtual Unified Office. Data acquisition and experimental control are built on top of the TANGO control system on all but the oldest synchrotron beamlines. Data reduction, on- and off-line analysis workflows and visualisation are supported by a common framework. Data is stored and catalogued, with access provided through the web portal. Elettra has implemented a PaNdata-like data policy since 2014.
        Speaker: Mr Milan Prica (Elettra)
        Paper
        Slides
      • 8
        High-Performance XPCS Data Reduction using Virtualized Computing Resources
        Demands for increased computing at synchrotron facilities are driven by new scientific opportunities often enabled by technological advances in detectors, as well as advances in data analysis algorithms. These advances generate larger amounts of data, which in turn require more computing power in order to obtain near real-time results. An example where advances in computation are critical is found in the X-ray Photon Correlation Spectroscopy (XPCS) technique, a unique tool to study the dynamic nature of complex materials from micrometer to atomic length scales, and time scales ranging from seconds to nanoseconds. The recent development and application of higher-frequency detectors allows the investigation of faster dynamic processes, enabling novel science in a wide range of areas. A consequence of XPCS detector advancements is the creation of greater amounts of image data that must be processed within the time it takes to collect the next data set in order to guide the experiment. Parallel computational and algorithmic techniques and high-performance computing (HPC) resources are required to handle this increase in data. In order to realize this, the APS has teamed with the Computing, Environment, and Life Sciences directorate at Argonne to use a virtualized computing resource located on the Argonne site. Virtual computing environments separate physical hardware resources from the software running on them, isolating an application from the physical platform. The use of this remote virtualized computing resource and the OpenStack and Cloudera management tools affords the APS many benefits. The virtualized environment allows the APS to install, configure, and update its Hadoop-based XPCS reduction software easily and without interfering with other users on the system. Its scalability allows the provisioning of more computing resources when larger data sets are collected. The XPCS workflow starts with raw data streaming directly from the detector to a compressed file on the parallel file system located at the APS. Once the acquisition is complete, the data is automatically transferred using GridFTP to the Hadoop Distributed File System (HDFS) running on the virtualized resource in a different building. This transfer occurs over two dedicated 10 Gbps optical links. By bypassing intermediate firewalls, this dedicated connection provides a very low latency, high-performance data pipe between the two facilities. Immediately after the transfer, Hadoop MapReduce-based data reduction algorithms are run in parallel on the provisioned compute instances, followed by Python-based error-fitting code. Resources provisioned for typical use by the XPCS application include approximately 120 CPU cores, 500 GB of distributed RAM, and 20 TB of distributed disk storage. Provenance information and the resultant reduced data are added to an HDF5 file, which is automatically transferred back to the APS for interpretation. This system is in regular use at the 8-ID-I beamline of the APS. The whole reduction process is completed shortly after data acquisition, typically in less than one minute, a significant improvement over previous setups. The faster turnaround time helps scientists make time-critical, near real-time adjustments to experiments, enabling greater scientific discovery. (An illustrative correlation sketch follows this entry.) *Work supported by U.S. Department of Energy, Office of Science, under Contract No. DE-AC02-06CH11357.
        Speaker: Mr Nicholas Schwarz (Argonne National Laboratory)
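        The central quantity of the XPCS reduction described above is the intensity autocorrelation g2(q, tau) = <I(t) I(t+tau)> / <I(t)>^2. The sketch below computes it brute-force with NumPy for a single q-bin on synthetic data; the production pipeline uses Hadoop MapReduce on compressed event data, so this is only an illustration of the mathematics:
        ```python
        # Brute-force g2 for one q-bin; illustrative only, not the APS pipeline.
        import numpy as np

        def g2_one_bin(frames, mask, max_lag):
            """frames: (T, ny, nx) intensity stack; mask: boolean (ny, nx) q-bin mask."""
            series = frames[:, mask].astype(np.float64)       # (T, npix) time series
            mean_sq = series.mean(axis=0) ** 2                # <I>^2 per pixel
            g2 = np.empty(max_lag)
            for lag in range(1, max_lag + 1):
                num = (series[:-lag] * series[lag:]).mean(axis=0)   # <I(t) I(t+lag)>
                g2[lag - 1] = np.mean(num / mean_sq)                # average over the q-bin
            return g2

        # Example with synthetic data:
        rng = np.random.default_rng(0)
        frames = rng.poisson(5.0, size=(200, 64, 64))
        mask = np.zeros((64, 64), dtype=bool); mask[20:30, 20:30] = True
        print(g2_one_bin(frames, mask, max_lag=20))
        ```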
    • 12:35 PM
      Lunch Marble Hall

    • Contributions 2 Marble Hall

      • 9
        Towards Holistic Data Processing for the User
        A new generation of instruments has resulted in increased complexity at all levels of the data chain. Furthermore, the experiments on these instruments, and the desire to compare the results to more detailed models, lead to a very large and diverse computational environment. Yet our goal is to facilitate the use of such facilities to expedite the users’ knowledge production. This contribution will provide several examples, from neutron beam lines at Oak Ridge National Laboratory, of how specialized computing functionality is being applied in various areas of data processing. Aspects that are working well and those where there are challenges will be discussed. Some of the topics to be presented will be integrating ever more complex “laboratory on a beamline” concepts, utilizing computing platforms from the user’s tablet or PC to national-scale supercomputing facilities, and working with specialized analysis codes. These analysis codes will include ab initio methods among others. Several necessary tradeoffs will be discussed, specifically web-based vs. local-machine interfaces and nimble, configurable vs. rigid, established workflows.
        Speaker: Dr Garrett Granroth (Oak Ridge National Laboratory)
      • 10
        Savu: Tomography reconstruction and processing pipeline.
        Savu is an open-source, portable, tomography reconstruction and processing pipeline developed at the Diamond Light Source as a post-processing tool for tomography data collected with a parallel-beam geometry. Written in object-oriented Python, it runs in parallel using MPI-based cluster computing or serially on a PC. The parallel HDF5 backend handles big data, and the design allows processing of multi-modal, n-dimensional data. The actual processing is performed by plugins, which are abstracted from the rest of the framework; the framework handles the movement of the data and controls the plugins. Each plugin performs an independent processing step (such as filtering or reconstruction). From a developer perspective, both existing (Python packages, C/C++ code) and new functionality is easy to integrate, as plugins are stand-alone and only need to provide the framework with information detailing the amount and type of data (e.g. projection or sinogram) that they would like to receive. From a user perspective, a simple process list is required, which specifies the order of plugins that should be applied to the data. Process lists are provided to the user, ensuring they do not require any knowledge of tomography data processing, and these lists can be tailored specifically to their data. More advanced users and beamline staff can experiment with different process lists containing a series of plugins chosen from the plugin repository. (A generic illustration of the plugin pattern follows this entry.)
        Speaker: Dr Nicola Wadeson (Diamond Light Source UK)
        Slides
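        The plugin idea described above (stand-alone processing steps that declare which data pattern they operate on, chained by a framework) can be sketched generically as below. This is not Savu's actual plugin API, only an illustration of the pattern:
        ```python
        # Generic sketch of a plugin-chain framework; NOT the real Savu interfaces.
        import numpy as np
        from scipy.ndimage import median_filter

        class MedianFilterPlugin:
            pattern = "projection"                 # data layout this step wants
            def process(self, data):
                return median_filter(data, size=3)

        class MinusLogPlugin:
            pattern = "sinogram"
            def process(self, data):
                # Beer-Lambert style conversion of normalised transmission to attenuation.
                return -np.log(np.clip(data, 1e-6, None))

        def run_process_list(data, plugins):
            """Apply each plugin in order; a real framework would also re-slice the
            data into the pattern (projections vs. sinograms) each plugin asks for."""
            for plugin in plugins:
                data = plugin.process(data)
            return data

        result = run_process_list(np.random.rand(8, 64, 64), [MedianFilterPlugin(), MinusLogPlugin()])
        print(result.shape)
        ```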
      • 11
        BDAE: Easy Parallel Scientific Data Analysis
        In business, Big Data analysis is most often managed with the Hadoop system[1]. The popularity of Hadoop has made the system well known and widely supported. Thus, as may be expected, the use of Hadoop for scientific data analysis has also been widely investigated[2][3]. Hadoop, however, has several design choices that make it less than optimal for scientific data processing. Most importantly, Hadoop, and the underlying file system HDFS, assumes that data is a sequence of bytes, while in science most data is not one-dimensional; splitting data at arbitrary points means that higher-dimensional structures, such as 2D images, 3D volumes or NetCDF records, are split across two nodes, and processing thus requires inter-node communication. In addition, Hadoop requires analysis scripts to be written in Java, which is perfectly possible for scientific data, but most often not the most convenient programming language. In this talk we introduce the Big Data Analysis Engine, BDAE (/b’dei/). In BDAE data is saved in a way that preserves the semantic structure of the data: images or volumes are kept at a single node, and NetCDF data is kept at one node for individual records, but different fields are split into different datasets, i.e. climate data is split between temperature, pressure, … so that the entire dataset need not be traversed to do analysis on a single component. Rather than Java, BDAE scripts are written in Python, and since the data structures are kept in BDAE, the programming interface is much simpler; a programmer can have all elements in a dataset presented as a simple iterator, e.g. for temp in temperature: … Data in BDAE is seamlessly distributed across the storage nodes in a data-analysis cluster: round-robin for datasets of unknown size at the time the dataset is created, i.e. streaming input, and in equal-size chunks for datasets where the total size is known at creation time, i.e. a NetCDF file that is imported into BDAE. Datasets may be replicated within a BDAE cluster: data that must be kept highly available may be replicated at two or more nodes, while other datasets are not replicated. Replicated and non-replicated data coexist on the same cluster, which is also a unique feature of BDAE. The talk will introduce BDAE, some internal features, and show examples of how parallel data analysis scripts are easily written without the programmer being exposed to parallelism or data distribution (a toy illustration of this programming style follows this entry). Examples include image analysis, tomographic reconstruction as in Fig 1, and statistics on a NetCDF dataset. ![Fig. 1 BDAE tomographic reconstruction. Middle level is a sketch to indicate a partial volume. ][1] References 1. Apache Software Foundation. Hadoop. 2. Fadika, Zacharia, et al. "Evaluating hadoop for data-intensive scientific operations." Cloud Computing (CLOUD), 2012 IEEE 5th International Conference on. IEEE, 2012. 3. Buck, Joe B., et al. "SciHadoop: array-based query processing in Hadoop." Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis. ACM, 2011. [1]: https://sid.erda.dk/seafile/f/6a1344be6a/
        Speaker: Prof. Brian Vinter (Niels Bohr Institute, University of Copenhagen)
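        The "simple iterator" programming style mentioned in the abstract can be pictured with the toy stand-in below; the Dataset class is hypothetical and only mimics the user-facing idea, not the actual BDAE API:
        ```python
        # Toy stand-in for a distributed dataset exposed as a plain iterator.
        import numpy as np

        class Dataset:
            """Pretend distributed dataset: yields one record (e.g. one 2D field) at a time."""
            def __init__(self, chunks):
                self._chunks = chunks          # in BDAE these would live on different nodes
            def __iter__(self):
                for chunk in self._chunks:
                    yield chunk

        temperature = Dataset([np.random.rand(360, 180) for _ in range(12)])  # 12 monthly fields

        # User-level script: per-record statistics without any explicit parallelism.
        monthly_means = [temp.mean() for temp in temperature]
        print("annual mean:", float(np.mean(monthly_means)))
        ```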
      • 12
        ESS View on SasView: Small Angle Scattering data analysis within the SINE2020 project
        SasView is a well-established, open-source, collaboratively developed software package for the analysis and modeling of small-angle scattering (SAS) data. The core functionality of SasView includes the fitting of model functions, pair-distance distribution function inversion and model-independent analysis. SasView provides a large collection of form and structure factors and, with the recently introduced modularization, allows for easy incorporation of user-defined models. The European Spallation Source (ESS) has in recent years taken an active role in supporting SasView with the aim of providing it for ESS users from the start of operation. To increase this effort, and as part of the EU-funded Horizon 2020 project SINE2020, ESS also employs two full-time SasView developers. The aim of the project is to deliver interoperable, versatile, robust, reliable, maintainable and sustainable data analysis software that can be used by all the involved neutron scattering facilities (i.e. ESS, ILL, ISIS, LLB, MLZ, and PSI). Here we present how the SINE2020 project enables the development of new features, code refactoring, GUI re-design and optimization for faster analysis methods by use of Graphical Processing Units. We also discuss an anticipated outcome of the project, which is a better user experience, making SasView a potential tool for live analysis of SAS data.
        Speaker: Dr Wojciech Potrzebowski (European Spallation Source ERIC, Data Management and Software Center)
      • 13
        Data Processing With DAWN at the Diamond Light Source
        DAWN is a free, multiplatform, open-source data analysis workbench built from the core mathematical and visualisation components of Diamond’s data acquisition software GDA (Generic Data Acquisition). Over the last few years, major new features have been added to DAWN to attempt to meet the diverse data processing requirements across the beamlines at Diamond, initially focusing on 2D powder diffraction/small-angle scattering (PXRD/SAXS) experiments. The aim of these new features is to lower the barriers to new users, improve the provenance of processed data and give a consistent data processing experience across the PXRD/SAXS beamlines. Currently a headless version of DAWN is being tested for automatic data processing of these experiments. The new Single-Writer/Multiple-Reader (SWMR) feature of the HDF5 file format lets the data processing read the data from the file as it is being written, enabling near real-time processing. The decoupled architecture of the DAWN processing framework allows all the processing steps currently available to work seamlessly with SWMR, with near real-time processing successfully tested on several beamlines. (A minimal SWMR reading sketch follows this entry.)
        Speaker: Dr Jacob Filik (Diamond Light Source)
        Slides
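        The SWMR reading pattern mentioned above is available directly in h5py. The sketch below is a minimal, assumed setup (the file and dataset names are invented) showing how a reader can poll a file that another process is still writing:
        ```python
        # Minimal SWMR "follower": processes frames as the writer appends them.
        import time
        import h5py

        def follow_frames(path, dataset="/entry/data/frames", poll=1.0):
            with h5py.File(path, "r", libver="latest", swmr=True) as f:
                dset = f[dataset]
                seen = 0
                while True:                         # would loop until interrupted
                    dset.refresh()                  # pick up data appended by the writer
                    n = dset.shape[0]
                    for i in range(seen, n):
                        frame = dset[i]
                        print("processing frame", i, "sum =", frame.sum())
                    seen = n
                    time.sleep(poll)

        # follow_frames("scan_running.h5")
        ```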
      • 14
        Portable Parallelization with the Bohrium Runtime System
        The Bohrium runtime system exploits an array-programming approach from a high-level language to extract parallelism and accelerate execution on a variety of hardware setups. By presenting the user with an array-programming abstraction, it is possible to extract a high level of parallelism without requiring the programmer to adjust the program to fit the current execution environment. Bohrium can use the NumPy library as the implementation of an array-programming approach and offload computations to the desired hardware. With such a setup, the programmer can develop the applications and algorithms entirely within a familiar environment, and later decide to use the Bohrium runtime system to accelerate the execution with GPGPUs or with a cluster installation. In some ways this is similar to Chapel and other languages; however, Bohrium does not require a specific language, instead it plugs into existing programming languages as a library. This removes the need for special toolchains, libraries and other surrounding support entities that are required for a separate language. As each language integration simply calls into a standard C interface, there is no dependence on the execution model of the programming language. This allows the programmer to switch the execution target through a configuration file or environment variable without making any modifications to the source program (an illustrative sketch follows this entry). Once a program is running and calling the Bohrium library, all requested operations are encoded as array bytecode operations and collected for execution in a lazy-evaluation manner. Once a result is required by the top-level program, the collected array bytecodes are rearranged to fit the current target architecture constraints before being passed on. The actual execution is performed in a manner that seeks to optimize for the characteristics of the actual execution device. For the GPGPU backend, one such optimization is to schedule data transfers such that they overlap computations and thus hide the latency inherent in GPGPU communication. For the CPU backend, this means distributing data in a NUMA-aware fashion to exploit the full memory bandwidth as well as utilizing JIT compilation and OpenMP to execute the bytecode sequence. First we present the current state of the Bohrium project, including features, performance measurements and caveats. Then we present the major ongoing projects within the Bohrium system: the Niels language, multi-core optimizations, Xeon Phi execution, FPGA execution, GPGPU targets, linear algebra packages, distributed setups and more.
        Speaker: Mads Kristensen (Niels Bohr Institute, UCPH)
        Slides
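        The drop-in usage model described above can be pictured as follows; the import switch reflects the idea presented in the talk, but treat the details (module name, backend selection mechanism) as assumptions rather than a definitive recipe:
        ```python
        # Same array code, two execution engines: switch the import, keep the program.
        try:
            import bohrium as np        # assumed drop-in module; lazy-evaluated bytecode backends
        except ImportError:
            import numpy as np          # plain NumPy fallback

        def jacobi_step(grid):
            """One stencil update, a typical array-programming kernel."""
            return 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] + grid[1:-1, :-2] + grid[1:-1, 2:])

        grid = np.zeros((1024, 1024))
        grid[0, :] = 100.0                      # hot boundary
        for _ in range(50):
            grid[1:-1, 1:-1] = jacobi_step(grid)
        print(float(grid[1:5, 1:5].sum()))      # forces evaluation of the collected bytecode
        ```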
      • 15
        Virtual NanoLab, an open and commercial graphical user interface for scientific simulation and analysis
        The Virtual NanoLab (VNL) was initially developed as a Graphical User Interface (GUI) to QuantumWise’s atomic-scale simulation tools, the Atomistix ToolKit (ATK). In recent years the GUI has been transformed into a platform which, through a number of open-source plugins, can be interfaced to almost any scientific simulation software. The software is now free for academic researchers. In this presentation I will discuss the design principles behind VNL, in particular how it integrates with the Python language and allows generation and interpretation of Python scripts. I will show the different components of the platform, the atomic builder, Python scripter, job manager and analysis modules, and how each of them can be extended through Python plugins. VNL is now widely used in the atomic-scale simulation community, not only with QuantumWise’s own simulation software; it has also been interfaced to a number of popular community codes, like VASP, Quantum Espresso, GPAW and Abinit. I will also present the software tools and agile methodologies used at QuantumWise. These are based on best practices from the software industry.
        Speaker: Dr Jess Wellendorff (QuantumWise A/S)
    • 3:05 PM
      Coffee Marble Hall

    • Contributions 3 Marble Hall

      • 16
        SIMEX: Simulation of Experiments at Advanced Laser Light Sources
        Realistic simulations of experiments at large-scale photon facilities, such as optical laser laboratories, synchrotrons, and Free Electron Lasers, are of vital importance for the successful preparation, execution, and analysis of these experiments, which investigate ever more complex physical systems, e.g. biomolecules, complex materials, and ultrashort-lived states of highly excited matter. Traditional photon science modeling takes into account only isolated aspects of an experiment, such as the beam propagation, the photon-matter interaction, or the scattering process, making idealized assumptions about the remaining parts, e.g. the source spectrum, temporal structure and coherence properties of the photon beam, or the detector response. In SIMEX, we have implemented a platform for complete start-to-end simulations, following the radiation from the source, through the beam transport optics to the sample or target under investigation, its interaction with and scattering from the sample, and its registration in a photon detector, including a realistic model of the detector response to the radiation. Data analysis tools can be hooked up to the modeling pipeline easily. This allows researchers and facility operators to simulate their experiments and instruments in real-life scenarios, identify promising and unattainable regions of the parameter space and ultimately make better use of expensive beamtime. Our software consists of a generic backbone defining the user and data interfaces to Calculators, which are responsible for the simulation of the segments in the virtual beamline. A number of specific Calculators for the photon source, photon propagation, photon-matter interaction, photon scattering, photon detection, and photon data analysis are pre-installed. Further contributed Calculators can easily be integrated by inheriting from the abstract interfaces and providing only a few well-defined interface methods (a generic sketch of this pattern follows this entry). A common data format description facilitates the data exchange among simulation codes. In this paper, we describe the general structure and implementation of the SIMEX software and discuss a number of applications: modeling of single particle imaging at the European X-Ray Free Electron Laser (XFEL), a pump-and-probe experiment with ultrashort pulsed optical laser excitation of a metal foil and subsequent probing by small-angle scattering with coherent XFEL radiation, and a long-pulse optical laser shock compression experiment probed by synchrotron radiation.
        Speaker: Dr Carsten Fortmann-Grote (European XFEL GmbH, Schenefeld, Germany)
        Proceedings
        Slides
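        The Calculator chain described above can be sketched generically as below; class and method names are illustrative assumptions, not the actual SIMEX interfaces:
        ```python
        # Generic sketch of chained beamline-segment calculators; names are invented.
        from abc import ABC, abstractmethod

        class AbstractCalculator(ABC):
            def __init__(self, parameters=None):
                self.parameters = parameters or {}
            @abstractmethod
            def backengine(self, data):
                """Run the physics of this segment on the incoming data and return the result."""

        class GaussianSourceCalculator(AbstractCalculator):
            def backengine(self, data):
                return {"pulse_energy_mJ": self.parameters.get("pulse_energy_mJ", 1.0)}

        class SimpleDetectorCalculator(AbstractCalculator):
            def backengine(self, data):
                # Toy "detector response": attenuate the incoming signal.
                return {"signal": 0.8 * data["pulse_energy_mJ"]}

        pipeline = [GaussianSourceCalculator({"pulse_energy_mJ": 2.0}), SimpleDetectorCalculator()]
        data = None
        for calculator in pipeline:          # source -> ... -> detector
            data = calculator.backengine(data)
        print(data)
        ```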
      • 17
        Integrating software: SASview, McStas and Mantid for powerful virtual SANS experiments
        **Abstract:** The topic of this presentation is an integration task, based on the three well-known neutron scattering software packages [McStas][1], [SASview][2] and [Mantid][3]. We will show a "full-circle" virtual SANS experiment where we combine - An arbitrary SASview scattering kernel - Instrument resolution realism from McStas - ToF event analysis in Mantid - Re-retrieval of the initial simulation parameters in a SASview analysis. We believe that the combination of the three listed packages proves to be powerful, and even if integration tasks are sometimes considered trivial exercises, they can be fully worth the effort in terms of new functionality that was not envisioned before. We further believe that the presented prototype is the first of a series of methods that will bring instrument resolution realism to data analysis in other parts of neutron scattering, e.g. in diffraction and inelastic scattering. The discussed integration is a result of a collaboration between DTU Physics, ESS DMSC and ISIS STFC, aiming to support the LOKI instrument development efforts at ESS. [1]: http://www.mcstas.org [2]: http://www.sasview.org [3]: http://www.mantidproject.org
        Speaker: Mr Peter Willendrup (DTU Physics)
      • 18
        Using Docker containers for photon experiment simulations in HPC environments
        Traditionally, virtual machines are used to emulate a specific operating system, providing complete isolation from the underlying hardware and software. They are used, for example, by system administrators to optimize resource usage by allowing several isolated services to be executed on different virtual machines running on the same physical machine, by DevOps to provide a desired environment for software testing, by cloud orchestration engines to provide requested resources, etc. The main drawbacks of such full virtual machines are the quite large size of the image file, significant startup/shutdown times (up to several minutes) and high virtualization overhead. Container virtualization, also called operating-system-level virtualization, addresses these drawbacks by shifting virtualization from the OS to the application layer. In this approach, the operating system's kernel is shared between all virtual machines running on top of it. The virtual machine (called a container in this case) is usually supposed to run only one application. The container image is hence comparably small and contains only a limited amount of system packages, the application and all of its dependencies. Such an approach allows a container to be started and stopped in a matter of seconds and produces almost no overhead. Typically, containers are used to deploy various isolated web services that then communicate with each other via a selected protocol (usually HTTP). Orchestration engines such as Kubernetes or Mesos can be used to control these services. The most popular implementation of container technology is the open-source containerization platform Docker, which is also used in the present work. All of the advantages of containers - small size, low overhead, portability - can be used not only for web services but also for all kinds of other applications, including scientific software. A developer creates an application and deploys it inside a Docker container. Docker provides efficient and secure application portability across various environments and operating systems. It can run on any infrastructure, whether it is a single machine, a cluster or a cloud, without the need to learn a new environment, install additional libraries, resolve dependencies or recompile the application. Additional efforts are required to deploy container-based applications in a high-performance computing (HPC) environment. There is some ongoing work but no standard solutions available yet; therefore our own approach will be presented, which allows deploying containerized applications in HPC environments without overhead or performance penalties. The suggested implementation is demonstrated for the photon experiment simulation platform SimEx, which implements a full start-to-end simulation of experiments at various light sources such as Free Electron Lasers. The simulations track the photons on their way from the source through the optics and the interaction region, all the way to the detector. Samples range from weakly scattering biomolecules, density modulations following laser–matter interaction, to dynamically compressed matter at conditions similar to planetary cores. SimEx simulations may take weeks to finish when run on a single machine. Therefore efficient parallelization of the code to run simulations on an HPC cluster is vital for getting results in a reasonable wall-clock time. After parallelization, SimEx can be "dockerized" (a Docker container image containing the application can be built) and run on an HPC cluster. The presentation will show parallel performance results as well as comparisons between bare-metal and virtualized simulations.
        Speaker: Sergey Yakubov (DESY)
        Slides
      • 19
        Advanced Visualization Capabilities for Neutron Scattering Data
        Large multidimensional neutron scattering datasets are routinely collected at neutron scattering facilities. Compared to other imaging tasks, neutron scattering data has unique characteristics, including binning along non-orthogonal axes and the need to distinguish between points with zero intensity and points without detector coverage. Meeting user expectations to interactively visualize and manipulate multi-gigabyte datasets at full resolution, both at the beamline and at their home institution, is a challenging task requiring special considerations. In this talk, I’ll discuss work we are undertaking to facilitate access to remote HPC resources capable of volume rendering at interactive frame rates. I’ll also talk about changes to data structures in the Mantid VSI that improve performance and reduce memory consumption. Examples of these improvements will be shown using data taken on various ORNL neutron scattering instruments. This research used resources at the High Flux Isotope Reactor and Spallation Neutron Source, a DOE Office of Science User Facility operated by the Oak Ridge National Laboratory.
        Speaker: Steven Hahn (Oak Ridge National Laboratory)
        Slides
    • 4:50 PM
      Group Photo Marble Hall

    • Posters Marble Hall

      • 20
        "Manyo-Lib" Object-Oriented Data Analysis Framework for Neutron Scattering
        We report the current status of the data-analysis software environment at the Materials and Life Science Facility (MLF) of the Japan Proton Accelerator Research Complex (J-PARC). MLF is a user facility which provides neutron and muon sources for experiments. The basic concept of the analysis environment for the neutron scattering instruments is to provide a software framework that has common and generic analysis functionalities for neutron scattering experiments. The framework, “Manyo Library”, is a C++ class library which can be used from a Python environment. Manyo-Lib is a software infrastructure that provides libraries for building data input/output functions, data-analysis functions, a network-distributed data processing environment, etc. User application software and data-analysis software have been developed for each instrument by adopting the framework. Twenty neutron-scattering spectrometers have been installed at MLF, and the data-analysis software developed on Manyo Library is working on 16 of them. Raw data files in the event-data format are filled into two- or three-dimensional histograms, with error values, constructed on the data containers. Many data-analysis operators for the histograms (the four arithmetic operations and so on), with error propagation, are provided, and the operators work with OpenMP (http://openmp.org/wp/). The histogram data in the containers can be converted into NeXus format files (http://www.nexusformat.org/). In FY 2015 and 2016 we have increased the efficiency of the data conversion, and we will show the result in the presentation.
        Speaker: Dr Jiro Suzuki (KEK, J-PARC)
        Proceedings
      • 21
        "Pixelator" Instrument Control and Data Acquisition System for Scanning Spectro-Microscopy
        Scanning transmission X-ray spectro-microscopy (STXM) is a synchrotron-based materials analysis technique that provides high-resolution imaging with strong, natural contrast based on various X-ray spectroscopic effects. STXM experiments involve focusing a monochromatic X-ray beam onto a thin sample and measuring the amount of transmitted radiation - this provides a single pixel of an image that is built up by raster-scanning the sample and measuring pixel values one by one. Typical data sets involve a set of images at different photon energies (a data-cube of two spatial dimensions and one spectroscopic dimension). While slower than full-field imaging, the simple set-up allows easier access to spectroscopy, other detector types for sensitivity to the sample surface (electron detection) or the presence of trace elements (fluorescence), and special sample environments. Precision and reproducibility of the scanning system are provided by interferometric measurement of the sample position, with feedback to the scanning stages. Software to control such instruments therefore requires a great deal of flexibility, while also minimising overhead in order to keep the scan speed reasonably high. The *Pixelator* STXM control software has been developed by a collaboration between the *Paul Scherrer Institut*, the *Max-Planck-Institut für Intelligente Systeme* and *Semafor Informatik & Energie AG*. It consists of: 1. an *Orchestra Control Engine* realtime system that tightly integrates the time-critical positioning, feedback and measurement systems, 2. a C/C++ server that handles scanning logic, control of the non-critical actuators (often via EPICS or similar) and writing/reading data to/from a NeXus-compliant HDF5 format, and 3. a user-friendly Qt-based GUI that communicates with the server via simple JSON-format commands. A Python scripting environment can also be used to generate the same commands for the server (an illustrative client sketch follows this entry). This presentation will discuss the structure and strategies of the *Pixelator* system along with some performance data and example data.
        Speaker: Dr Benjamin Watts (Paul Scherrer Institute)
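        The JSON-command client pattern mentioned above might look roughly like the following; the command vocabulary, host and port are invented for the example and are not the real Pixelator protocol:
        ```python
        # Hypothetical JSON-over-TCP command client; the protocol shown is invented.
        import json
        import socket

        def send_command(command, host="localhost", port=9000):
            message = (json.dumps(command) + "\n").encode("utf-8")
            with socket.create_connection((host, port), timeout=5.0) as sock:
                sock.sendall(message)
                reply = sock.makefile("r", encoding="utf-8").readline()
            return json.loads(reply)

        # Hypothetical scan request a scripting client might issue:
        # send_command({"cmd": "start_scan", "region": {"x": [0, 20e-6], "y": [0, 20e-6]},
        #               "pixel_size": 50e-9, "dwell_ms": 2.0})
        ```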
      • 22
        A New Design for Live Neutron Event Data Visualisation for ISIS and ESS
        Viewing and reducing event data “live” during an experiment is an increasingly important part of experiment control. At ISIS we had developed a simple TCP streamer mechanism that allowed data to be processed by Mantid during an experiment, but this was limited to just the neutron events and was likely to encounter performance issues with our new generation acquisition hardware. As part of an ESS in-kind collaboration, ISIS is working on developing data streaming technologies for managing and delivering such data to clients. It was thus a good opportunity to replace our old streaming system with this developing technology, not only to provide enhanced functionality now for ISIS but also to provide a prototype for developing and testing ESS requirements.
        Speakers: Dr Frederick Akeroyd (STFC), Matthew Jones (ISIS/RAL)
      • 23
        A new paradigm for data analysis workflows
        There are many data analysis packages for the different X-ray and neutron scattering techniques; for small-angle scattering (SAXS/SANS) alone there are something in the region of 50! [1] These packages bring many challenges, most notably the challenges of sustaining development *and support*, providing for *and maintaining* cross-platform deployment, utilising multi-processor compute resources as and when necessary, and all the time trying to keep things simple for the end user! Community-developed software can of course address many of these challenges, particularly if managed well, but often the simplicity required by the new or non-expert end user still gets lost along the way. The cold reality is that many users simply do not want to have to install and update packages locally, or worry about whether their computer will have sufficient cores or memory. So how much better it would be if they could have the applications they need, in one place, running on a machine that is up to the job, within an environment that is both familiar and gives structure to their workflow? Here we introduce *GenApp* [2], a target-agnostic infrastructure for the creation and deployment of UIs for underlying executables, which is also integrated with Apache Airavata, and illustrate its use to provide the *SASSIE-Web* [3] framework for the ‘atomistic’ modelling of solution structures from SAXS/SANS data. However, it is anticipated that *GenApp* will be of use in a wide range of scientific applications. This work has been funded by a joint grant from the UK EPSRC (EP/K039121/1) and US NSF (CHE-1265821) in support of the CCP-SAS project. [4] [1] http://www.smallangle.org [2] Brookes, E.H., Anjum, N., Curtis, J.E., Marru, S., Singh, R. & Pierce, M. The GenApp framework integrated with Airavata for managed compute resource submissions. *Concurrency Comput. Pract. Exper.* (2015), 27, 4292–4303. [3] https://sassie-web.chem.utk.edu/sassie2/ [4] http://www.ccpsas.org/
        Speaker: Dr Stephen King (ISIS Pulsed Neutron and Muon Source)
        Poster
      • 24
        A novel computational method for X-ray fluorescence data and its deployment in the workflow of a synchrotron beamline
        We present a concrete example of the research and development of a set of computational methods and the challenges faced in integrating them into the control system and workflow of a synchrotron beamline in operation. An important computational operation on a set of spectra is that of aligning them to a reference spectrum. In X-ray fluorescence this is referred to as energy calibration and may be necessary for fitting low-count acquisitions. Typically this is done in a linear manner and sometimes requires user feedback. Automated methods exist but are often focused on specific types of data. We have recently presented at [EXRS2016][1] a new automated method that is based on a non-linear approach and is specialised for XRF data. The initial application to two different multi-element detector systems at the beamline [TwinMic][2] (Elettra - Sincrotrone Trieste) yielded promising results. The software is open source. This poster presentation aims at introducing the method and highlighting the software strategy we follow in order to integrate it into the workflow of an operating beamline (a baseline alignment sketch follows this entry). It should serve as a case study where a novel computational method is introduced into the standard information workflow of a lab. [1]: http://www.exrs2016.se/ "European Conference on X-Ray Spectrometry (EXRS2016) Gothenburg, Sweden" [2]: https://www.elettra.eu/elettra-beamlines/twinmic.html
        Speakers: Dr Georgios Kourousias (Elettra - Sincrotrone Trieste), Mr Roberto Borghes (Elettra Sincrotrone Trieste)
        Poster
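        The general idea of aligning a spectrum to a reference (energy calibration) can be sketched with SciPy as below; the published method is non-linear and XRF-specific, so this simple gain/offset fit is only the baseline approach it improves upon:
        ```python
        # Baseline sketch: fit a gain/offset mapping of the channel axis that best
        # overlays a measured spectrum on a reference spectrum.
        import numpy as np
        from scipy.optimize import minimize

        def align_to_reference(spectrum, reference):
            channels = np.arange(spectrum.size, dtype=float)

            def misfit(params):
                gain, offset = params
                warped = np.interp(gain * channels + offset, channels, spectrum)
                return float(np.sum((warped - reference) ** 2))

            return minimize(misfit, x0=[1.0, 0.0], method="Powell").x

        # Synthetic check: a detuned copy of a two-peak reference spectrum.
        x = np.arange(1024, dtype=float)
        reference = np.exp(-0.5 * ((x - 300) / 5) ** 2) + np.exp(-0.5 * ((x - 700) / 5) ** 2)
        measured = np.interp(1.002 * x + 3.0, x, reference)
        print(align_to_reference(measured, reference))   # should recover roughly gain 0.998, offset -3
        ```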
      • 25
        Automated Pair-Distribution Function Data Processing
        The XPDF project aims to provide a hands-off data service for users to produce X-ray Pair Distribution Function data. The software will be accessible to non-expert users, allowing scientists from a variety of disciplines to get PDF data without knowledge of the details of X-ray powder diffraction and with minimal intervention from beamline staff. The software for the project uses several of the technologies already developed by Diamond Light Source and other facilities to allow the automatic processing of X-ray powder diffraction data, including Dawn (Data Analysis Workbench), GDA (Generic Data Acquisition), iSPyB and SynchWeb. The interactions of these different applications and the greater XPDF software project will be presented.
        Speaker: Dr Timothy Spain (Diamond Light Source Ltd.)
        Poster
      • 26
        Automatic Neutron Radiography detector focussing and field of view adjustment using image processing algorithms
        Neutron imaging is becoming an increasingly important diagnostic technique due to its complementary nature to X-ray imaging. The technique is employed in a wide range of applications in science and engineering [1–2]. Focussing of the CCD-camera-based neutron radiography (N-Rad) detector system is one of the keys to the optimal data acquisition required for successful analysis of radiographs. Detector development plays an important role in high-resolution neutron radiography and tomography. Several approaches for high-resolution neutron imaging have recently been developed, including the use of specialized micro-focusing optical devices [3]. Kardjilov et al. show that the Cold Neutron Radiography and tomography station (CONRAD) facility in Germany uses a motorized camera stage for fine-tuning of the object distance in order to focus the image [4]. The N-Rad facility located at the SAFARI-1 research reactor in South Africa is currently being upgraded to include a state-of-the-art detector system with improved control over the field-of-view adjustment and lens focusing. In this presentation, our approach to the N-Rad detector system upgrade will be described. This involves automation by means of image processing software (a sketch of a typical sharpness-metric approach follows this entry), followed by the electromechanical adjustment of the field of view and lens focussing using micro-adjustment devices, thereby eliminating a tedious, time-consuming manual process. **References** [1] Banhart, J., Borbély, A., Dzieciol, K., Garcia-Moreno, F., Manke, I., Kardjilov, N., Kaysser-Pyzalla, A.R., Strobl, M., Treimer, W. (2010) X-ray and neutron imaging – Complementary techniques for materials science and engineering. International Journal of Materials Research (formerly Zeitschrift fuer Metallkunde) 101(9):1069-1079 [2] Kardjilov, N., Manke, I., Hilger, A., Strobl, M., Banhart, J. (2011) Neutron imaging in materials science. Materials Today 14(6) [3] Lehmann, E.H., Frei, G., Kuhne, G., Boillat, P. (2007) The micro-setup for neutron imaging: A major step forward to improve the spatial resolution. Nuclear Instruments & Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment 576(2–3):389–396 [4] Kardjilov, N., Dawson, M., Hilger, A., Manke, I., Strobl, M., Penumadu, D., Kim, F.H., Garcia Moreno, F., Banhart, J. (2011) A highly adaptive detector system for high resolution neutron imaging. Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, 651(1):95-99.
        Speaker: Mr Evens Moraba (Necsa SOC Limited)
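        A common image-processing recipe for the focusing step described above is to maximise a sharpness metric over a sweep of lens positions; the hypothetical sketch below uses the variance of the Laplacian, with the hardware calls left as placeholders:
        ```python
        # Sharpness-metric autofocus sweep; camera and stage calls are placeholders.
        import numpy as np
        from scipy.ndimage import laplace

        def sharpness(image):
            """Variance of the Laplacian: larger values mean sharper edges."""
            return float(laplace(image.astype(np.float64)).var())

        def autofocus(positions, acquire_frame, move_lens):
            scores = []
            for pos in positions:
                move_lens(pos)                              # command the micro-adjustment stage
                scores.append(sharpness(acquire_frame()))   # grab a radiograph at this position
            best = positions[int(np.argmax(scores))]
            move_lens(best)
            return best, scores

        # autofocus(np.linspace(-2.0, 2.0, 21), acquire_frame=camera.grab, move_lens=stage.move)
        ```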
      • 27
        Chopper Control at the ISIS Pulsed Neutron and Muon Source
        ISIS has a variety of neutron choppers present across the instrument suite, and these need to be remotely controllable by the scientists performing experiments. We will cover the various chopper models currently in use, how they are controlled (on both LabVIEW and EPICS based instruments), and the interface provided to the end user.
        Speaker: Mr David Keymer (STFC)
      • 28
        Comparing local minimizers for fitting neutron and muon data with the Mantid framework
        Fitting is the process of trying to fit a mathematical model or function to some data, where the data may originate from measurements at beamlines or simulations. A simple example could be the problem of fitting a polynomial background function plus a set of peak functions to a spectrum. Fitting is commonly a core functionality in neutron, muon and x-ray data reduction and analysis software packages. It is required in tasks such as instrument calibration, refinement of structures, and various data analysis tasks specific to different scientific techniques. The [Mantid software project](http://www.mantidproject.org) provides an extensible framework that supports high-performance computing for data manipulation, analysis and visualisation of scientific data. It is primarily used for neutron and muon data at several facilities worldwide. One of the core sub-systems of Mantid is the curve fitting system. Mantid also includes generic and technique-specific fitting graphical user interfaces. The Mantid fitting system offers a great deal of flexibility in that it is possible to add and combine different functions, minimizers, types of constraints, and cost functions as plug-ins. Users can apply different combinations of these elements through the same user interface, either via scripting (commands and algorithms) or graphical user interfaces. In addition, some of these elements, such as functions, are easy to add as plug-ins via scripting in Python. Minimizers play a central role when fitting a function to experimental or simulated data. The minimizer is the method that adjusts the function parameters so that the model fits the data as closely as possible, whereas the cost function defines the concept of how close a fit is to the data. Local minimizers are widely used to fit neutron and muon data. Several local minimizers are supported in Mantid (as in other software packages used in the neutron, muon and x-ray community). However, there is a lack of openly available comparisons. We have included in the latest release of Mantid (v3.7) [a comparison](http://docs.mantidproject.org/nightly/concepts/FittingMinimizers.html) of the performance of 8 different minimizers in terms of goodness of fit (chi-squared or similar statistics) and run time. The comparison has been done against the [NIST nonlinear regression problems](http://itl.nist.gov/div898/strd/general/dataarchive.html). This can inform users and developers as to: - What performance can be expected from different minimizers in relative terms and what alternatives might be more appropriate for different applications. - How modified minimizer methods or newly added ones perform as compared to already available alternatives. For the next releases of Mantid we plan to extend the comparison with test problems from neutron and muon data, considering different scientific areas, and also further visualization of fitting results. Furthermore, on the basis of our comparisons, we intend to incorporate a new, flexible minimizer, RAL-NLLS, whose aim is to improve the reliability and broaden the functionality of the Mantid fitting system. (A toy illustration of such a comparison follows this entry.)
        Speaker: Dr Anders Markvardsen (ISIS neutron & muon source, STFC)
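        The shape of such a comparison can be illustrated outside Mantid with a toy problem: fit one synthetic Gaussian-plus-background dataset with two SciPy least-squares methods and report chi-squared and run time. This is only an illustration of the exercise, not Mantid's benchmark harness or the NIST problem set:
        ```python
        # Toy minimizer comparison on a synthetic peak-fitting problem.
        import time
        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(1)
        x = np.linspace(0, 10, 200)
        true = [5.0, 4.0, 0.5, 1.0]                 # amplitude, centre, sigma, background
        model = lambda p, x: p[0] * np.exp(-0.5 * ((x - p[1]) / p[2]) ** 2) + p[3]
        y = model(true, x) + rng.normal(scale=0.2, size=x.size)

        def residuals(p):
            return model(p, x) - y

        for method in ("trf", "lm"):                # two local least-squares minimizers
            start = time.perf_counter()
            fit = least_squares(residuals, x0=[3.0, 5.0, 1.0, 0.0], method=method)
            chi2 = float(np.sum(fit.fun ** 2))
            print(f"{method}: chi2={chi2:.3f}, time={time.perf_counter() - start:.4f} s")
        ```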
      • 29
        Data analysis platform in support of a Cryo-Electron Microscopy facility
        eBIC (electron Bio-Imaging Centre, http://diamond.ac.uk/Science/Integrated-facilities/eBIC.html) at Diamond Light Source provides scientists with state-of-the-art experimental equipment and expertise in the field of cryo-electron microscopy, for single particle analysis and cryo-tomography. Currently two powerful cryo-electron microscopes allow users to investigate the structure of individual cells and to visualise single bio-molecules, exploiting techniques that are rarely available at home laboratories. We are integrating and automating pipelines to help users perform both on-the-fly tomography and single particle analysis. The raw data is typically large, producing gigabytes of data for both tomography tilt series and single particle analysis each hour. On top of this, because radiation dose rates often have to be kept low, the data is very noisy and requires a lot of processing to produce high-resolution 3D reconstructions of the original samples. Tomography data is being processed using custom automation around the IMOD (http://bio3d.colorado.edu/imod/) software. Single particle analysis automation will be based on the Scipion framework (http://scipion.cnb.csic.es/) using a variety of underlying tools. The software will run on our in-house cluster, utilising our parallel file system and high-performance network. We are also integrating with our ISPyB database and plan to add data input from Scipion later this year. This means we can expose results through the SynchWeb interface, which will allow users to browse and search through their results as processing steps complete, as well as track the full sample lifecycle, including sample shipping and data collection.
        Speaker: Mr Kevin Savage (Diamond Light Source)
      • 30
        Data management system of China Spallation Neutron Source
        The design philosophy of the data management system for the China Spallation Neutron Source (CSNS) is data safety, user convenience, and big-data sharing. With these considerations, four main parts are covered: data transfer, data storage, the metadata catalogue, and data access. A high-speed and highly reliable data transfer service has been developed. It will handle data delivery among several locations: local beamlines, the on-site computing centre, and remote mirrors for disaster backup or supercomputing, in support of high-performance computing and big-data analysis. Meanwhile, multi-level hierarchies including hot/cold/backup zones are introduced in data storage, in order to apply different data retrieval policies and grant permissions. The metadata catalogue is built on ICAT (http://www.icatproject.org/). A load-balanced ICAT cluster with read/write splitting is used to ensure the safety and stability of the database. The web-based data portal will provide users with data access anytime and anywhere, including metadata indexing, data download/upload, a NeXus data browser and visualization in one place.
        Speaker: Mr Ming Tang (Institute of High Energy Physics)
      • 31
        Data Reduction and Simulation for Novel Detector Geometries in Mantid
        L. Moore1, O. Arnold1,2, K. Kanaki3, T. Nielsen3, J. Taylor3, M. Hart1 1. ISIS Facility, Rutherford Appleton Laboratory, Didcot, UK 2. Tessella plc, Abingdon, Oxfordshire, UK 3. European Spallation Source, Lund, Sweden The LOKI instrument, for broadband Small Angle Neutron Scattering, is currently under development for the European Spallation Source in Lund, Sweden. This instrument is planned to use a trapezoidal array of BAND-GEM **[1]** (Boron Array Neutron Detectors, Gas Electron Multiplier) detectors in a large irregular grid pattern. The assembly gives higher pixelation at low scattering angles, with pixel size increasing at larger scattering angles. Each detector panel will be separated into 8 sectors, each of which contains between 500 and 1000 individual (uniquely shaped) detectors. There will be three panels in total, which translates to circa 2700 detectors. Presently, the LOKI instrument is in its concept stage, with many development iterations of a proposed implementation of the physical BAND-GEM detector network and associated hardware. There have also been efforts on the part of the detector group at the ESS in prototyping instrument behaviour using McStas simulations [2]. McStas is a neutron ray-tracing simulation tool for neutron scattering instruments and experiments. The outputs of these simulations have provided a basis for development work on prototyping the data reduction workflow for this instrument. This enables the development and testing of the data reduction process in the absence of a physical instrument. The Mantid framework [3] has been chosen by the ESS to be the main data reduction service. This is currently in use at ISIS, SNS and ILL. Mantid contains an in-memory virtual instrument representation. This supports geometric calculations critical to time-of-flight neutron data reduction. These virtual instruments can be defined using an XML format known as the instrument definition file (IDF). There has also been considerable effort in the past to ensure interoperability between McStas and Mantid, with existing capability allowing the export of McStas data and instruments to Mantid-readable formats. One of the major challenges in the design of the data reduction workflow is LOKI’s irregular detector geometry, for which there was no sensible support within the Mantid framework. Naïve LOKI IDF implementations, based on a historical approach to defining detectors, resulted in major performance issues, and practically made Mantid unusable for the proposed geometry. Amongst many incremental improvements, a new design for topologically regular, but geometrically irregular, detector geometries has been provided. This arrangement is known as the StructuredDetector and enables faster, more efficient loading, as well as runtime processing, of the LOKI virtual instrument in Mantid. This poster will highlight the changes to the Mantid Framework which enabled the inclusion of this additional geometry type, and the resulting ability to process LOKI geometries quickly and efficiently. We will also discuss further work undertaken to increase interoperability between the Mantid and McStas frameworks in order to bridge the gap between the simulations and the data reduction processes, and how these processes function together. ### References 1. http://cerncourier.com/cws/article/cern/27921 2. http://www.mcstas.org/ 3. http://www.mantidproject.org/ Email: **lamar.moore@stfc.ac.uk**
        Speaker: Mr Lamar Moore (STFC/ISIS)
      • 32
        Data Reduction at the ILL: A Comparison Between Mantid and Lamp
        The ILL has begun a project to phase out the use of its existing data reduction software, LAMP [1], and to adopt the Mantid Framework [2]. Some of the first work being undertaken in the Mantid adoption project is to compare the reduction output between LAMP and Mantid. In some cases, such as the loading of raw data, an identical output can be expected. In other cases it might be expected that there will be differences in the data, but where this is the case the reasons for such differences should be understood. LAMP has been in use for over 20 years at the ILL, and there are a number of differences in the workflows instrument scientists use when compared with the other facilities that use Mantid. This has led to a number of changes and additions in the Mantid software from the ILL. We will show some of those that were required to make the comparisons, and how we have kept such algorithm changes as general as possible, so they can be used by other facilities. Initial efforts on Mantid integration at the ILL have focused on Time-of-Flight (IN4, IN5 and IN6) and Backscattering (IN16B) spectroscopy. In this presentation we will present some details of the work comparing the output of LAMP and Mantid for some of these instruments. We discuss some of the different approaches in use between LAMP and Mantid for various reduction routines, for example in the S(Q, ω) conversion. We will also summarise the efficiency of the different approaches for some of the more computationally expensive algorithms. [1] https://www.ill.eu/instruments-support/computing-for-science/cs-software/all-software/lamp/the-lamp-book/ [2] http://www.mantidproject.org/
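        As a trivial, hypothetical illustration of the kind of numerical check this involves (file names and the export format are invented; both packages are assumed to write the same reduced quantity, e.g. S(Q, ω), as plain text columns):

        ```python
        import numpy as np

        # Hypothetical exports of the same reduced data set from each package.
        lamp = np.loadtxt("in5_run1234_lamp.dat")
        mantid = np.loadtxt("in5_run1234_mantid.dat")

        if lamp.shape != mantid.shape:
            raise SystemExit(f"shapes differ: {lamp.shape} vs {mantid.shape}")

        abs_diff = np.abs(lamp - mantid)
        rel_diff = abs_diff / np.maximum(np.abs(lamp), 1e-12)
        print(f"max abs diff {abs_diff.max():.3e}, max rel diff {rel_diff.max():.3e}")
        # Loading steps should agree to rounding error; genuine algorithmic differences
        # (e.g. in the S(Q, w) conversion) show up as larger, explainable deviations.
        ```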
        Speakers: Dr Antti Soininen (ILL), Dr Gagik Vardanyan (ILL), Dr Ian Bush (Tessella / ILL), Mrs Verena Reimund (ILL)
      • 33
        Development of a new integrated and streamlined data process at HFIR Bio-SANS
        How do we leverage the sample, the sample environment, and the instrument configuration data that is available in various systems to assist data processing? How do we plan an experiment and characterize corresponding scan modes like “transmission”, and scan types like “sample”, “background” and “media”, to support metadata mining? How do we use cataloged metadata to enable automated data reduction? We will present our work conducted at HFIR Bio-SANS on how our software and instrument scientists use metadata-driven techniques to streamline the data process from experiment planning, data acquisition, and data cataloging, to data reduction. Data reduction is both challenging and time consuming, and an integrated and streamlined data process could make these efforts more transparent to users.
        Speaker: Dr Shelly Ren (ORNL)
      • 34
        Development, testing and deployment of the ESS data aggregation and streaming software
        A data aggregation and streaming software system is being developed at the ESS Data Management and Software Centre as part of the BrightnESS project, with ISIS as an in-kind partner. Data produced by sources such as detector event formation systems, sample environment, motion control and choppers will be aggregated by the system, to be consumed by software performing tasks including live data reduction and file writing. The design is based on Apache Kafka, using Google FlatBuffers for serialisation. An integration laboratory where equipment and software can be deployed is available at ESS. We will present our approach to the development and automated testing and deployment of the multiple system components.
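        A minimal sketch of what a downstream consumer of such a stream might look like (using the kafka-python client for illustration; the topic name, broker address and FlatBuffers schema are assumptions, not the actual ESS configuration):

        ```python
        from kafka import KafkaConsumer

        consumer = KafkaConsumer(
            "detector_events",                   # hypothetical topic name
            bootstrap_servers="localhost:9092",  # hypothetical broker address
            auto_offset_reset="latest",
        )

        for message in consumer:
            buffer = message.value               # raw bytes: one FlatBuffers-serialised message
            # In the real system the buffer would be decoded with the accessor classes
            # generated from the agreed FlatBuffers schema before reduction or file writing.
            print(f"offset={message.offset}, {len(buffer)} bytes received")
        ```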
        Speaker: Afonso Mukai (European Spallation Source ERIC)
        Poster
      • 35
        Electronic Notebook for Neutron Scattering Experiments
        Tired of managing piles of paper logbooks for your instrument? Now you can keep instrument journals electronically. The Electronic Notebook application developed by the Gumtree team is available to help instrument scientists write and manage their instrument logs. This application has been used by the SANS instruments at ANSTO, and it is accepted as an easy, standardised, and secure way of writing instrument journals.
        Speaker: Mr David Mannicke (ANSTO)
      • 36
        EPICS Qt GUI Based Applications at the Australian Synchrotron
        The EPICS Qt Framework is used at the Australian Synchrotron to build GUI applications for the control of beamlines. This includes EPICS-aware widgets, as well as a simple XML-based definition of the application menu structure which allows customized applications to be constructed easily and quickly without the need for writing code. By creating reusable Qt components and forms to support specific types of applications (such as motion control, scanning or imaging), user interfaces that are consistent and standardized across beamlines can be easily developed. Several examples are presented, including Qt components to interact with the EPICS motor record, as well as a flexible interface to the areaDetector framework which can be easily customized to add new detectors.
        Speaker: Mr Paul Martin (Australian Synchrotron)
      • 37
        Event classification and performance diagnostics software for GEM neutron detectors
        The Macromolecular Diffractometer (NMX) instrument at the European Spallation Source will use novel Gadolinium gas electron multiplier (GEM) detectors for improved efficiency and spatial resolution. In the data acquisition chain, a software event formation step will deduce position and time of an incident neutron from charge deposition data by means of the time projection chamber (uTPC) technique. As part of the Horizon 2020 project BrightnESS, charge deposition events from detector trails are classified and analyzed according to a variety of criteria, producing a set of quality indicators for each event. In addition to bettering the scientific understanding of how these detectors work, the results will enable the prototyping and evaluation of efficient algorithms to be used in the final data acquisition chain at the NMX station. The software can also form the basis of diagnostic and quality control tools for operations. The results of these analyses will be presented as well as conclusions on best criteria for accepting reliable events.
        Speaker: Martin Shetty (European Spallation Source ERIC)
      • 38
        Event Processing Neutron Powder Diffraction Data with Mantid
        Mantid [1] is a joint collaboration between ESS, ISIS and ORNL. One of the main features of this framework is the possibility to use event data. Event data allows novel techniques to be used, such as asynchronous parameter scans (over temperature or magnetic field, for example) and pump-probe experiments. Mantid allows for the separation of measurement setup from the way the data is processed and, ultimately, analyzed. For example, a user can measure data while ramping a temperature and select, in reduction, what temperature ranges are grouped together. These capabilities unlock new possibilities for powder diffraction measurements. [1] http://www.mantidproject.org
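        A minimal sketch of this filtering-at-reduction-time idea using Mantid's Python API (the file name, log name and temperature ranges below are placeholders for illustration):

        ```python
        from mantid.simpleapi import Load, FilterByLogValue

        events = Load("example_run_events.nxs")   # hypothetical event-mode NeXus file

        # Split the same measurement into temperature bands after the fact.
        low_t = FilterByLogValue(InputWorkspace=events, LogName="SampleTemp",
                                 MinimumValue=10.0, MaximumValue=50.0)
        high_t = FilterByLogValue(InputWorkspace=events, LogName="SampleTemp",
                                  MinimumValue=50.0, MaximumValue=100.0)
        # Each filtered workspace can then be reduced separately, so the grouping by
        # temperature range is chosen at reduction time rather than during acquisition.
        ```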
        Speaker: Dr Peter Peterson (Oak Ridge National Laboratory)
        Poster
      • 39
        From the Dream to Reality: MX at NSLS2 System Administrator Point of View.
        **From the Dream to Reality: Macromolecular Crystallography at NSLS2, a System Administrator's View.** *Leon Flaks* Brookhaven National Laboratory Bldg 741, NSLS2 BNL Upton, NY 11973 USA Back in 2014 we reported our plans for macromolecular crystallography at NOBUGS 2014. Now it is time to discuss what we have accomplished since that time. Two MX beamlines have seen first light, the first samples have been measured, and we are going through commissioning with operations scheduled to start at the end of this calendar year. We have a new Eiger detector delivered and installed. The computational cluster has its first 10 nodes operational and is growing. An approximately 1 PB storage array is up, with the GPFS file system available and a direct link to the beamlines over 40G network interfaces. As we progress towards operations, more improvements are underway: workstation network links are being upgraded to 10G, and fast, low-latency PCI-based buffer storage will be added soon. SSD-based storage arrays are also under consideration. Email corresponding author: flaks@bnl.gov
        Speaker: Leonid Flaks (Brookhaven National Laboratory)
      • 40
        Graphical user interface and experiment control software at the MX beamlines at EMBL Hamburg
        The EMBL Hamburg outstation, located on the DESY campus (Hamburg, Germany), operates two macromolecular crystallography (MX) beamlines, P13 and P14. Both beamlines are equipped with high-end instrumentation such as adaptive optics, configurable compound refractive lenses, beam conditioning units, kappa goniostats, Pilatus detectors, and automatic sample changers to fulfill the needs of the MX community. For the high-level control of the beamlines and data acquisition, the MXCuBE (Macromolecular Xtallography Customized Beamline Environment, Gabadinho, J. et al. (2010). J. Synchrotron Rad. 17, 700–707) interface is used. The growing complexity of experiments requires easy and fast adaptation of the control software. The data model and overall architecture of MXCuBE allow new data collection methods and strategies to be implemented (large-scale grid scans, serial crystallography experiments, etc.). EMBL Hamburg is part of the MXCuBE collaboration (http://mxcube.github.io/mxcube/) and actively participates in the software co-development. The overall infrastructure of graphical user interfaces (running on the Qt4 library), experiment control, on-line data processing and the ISPyB experiment information system at the EMBL Hamburg MX beamlines will be presented.
        Speaker: Dr Ivars Karpics (EMBL Hamburg)
        Paper
      • 41
        Improved integration volumes for single crystal diffraction
        There are many methods for integrating single crystal peaks, but most have problems finding the volume in which to integrate the peaks. Also, many peaks are near the edge of detectors or between detectors, and their integrated intensity is underestimated. New work has been done to improve the integration volumes for peaks from single crystal instruments that have tens of thousands of peaks, many of them weak. The regions around the strong peaks are fit in reciprocal space with a 3D combination of Gaussian functions and exponentials to fit the center, tail, and background of the peak. These 3D fits are saved, and machine learning is used to decide which fit to apply to each of the weak peaks. Integration is done on the 3D fits of the data. The fits and the integration of the peaks are done in parallel. This work will be implemented in Mantid as a new algorithm.
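        A one-dimensional toy version of the fitting idea (generic SciPy, not the Mantid algorithm itself; the Gaussian-plus-exponential model and the data are synthetic):

        ```python
        import numpy as np
        from scipy.optimize import curve_fit

        def peak_model(q, amp, centre, sigma, bkg_amp, bkg_decay):
            gaussian = amp * np.exp(-0.5 * ((q - centre) / sigma) ** 2)   # peak centre
            background = bkg_amp * np.exp(-bkg_decay * q)                 # exponential background/tail
            return gaussian + background

        rng = np.random.default_rng(0)
        q = np.linspace(0.0, 4.0, 200)
        data = peak_model(q, 100.0, 2.0, 0.1, 5.0, 0.5) + rng.normal(0.0, 1.0, q.size)

        params, _ = curve_fit(peak_model, q, data, p0=[50.0, 1.9, 0.2, 1.0, 0.1])
        peak_area = params[0] * abs(params[2]) * np.sqrt(2.0 * np.pi)     # area of the fitted Gaussian
        print(f"integrated intensity: {peak_area:.1f}")
        ```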
        Speaker: Vickie Lynch (Oak Ridge National Laboratory)
      • 42
        IROHA2: Standard instrument control software framework in MLF, J-PARC
        Neutron and muon experimental instruments at the Materials and Life Science Experimental Facility (MLF) of the Japan Proton Accelerator Research Complex (J-PARC) carry out many kinds of measurements and produce an enormous amount of experimental data, such as raw data and metadata representing measurement conditions, for a large number of external users from both academia and industry. User-friendly and automated control software is important for performing efficient measurements. We have developed the MLF standard instrument control software framework, called “IROHA2”. IROHA2 consists of four core software components: *the device control server* to control and monitor each device, *the instrument management server* to authenticate users, manage measurements and configure the instrument setup, *the sequence management server* to run automatic measurements, and *the integrated control server* to unify instrument control and monitoring. Because each software component of IROHA2 has a web interface, all of its functions can be used in a multi-platform environment via a web browser. IROHA2 is also connected to and cooperates with several other systems: the experimental status system, the MLF integrated authentication system and the MLF business database. With IROHA2, MLF users will be able to perform “logbook-less” experiments. In this presentation, we will show the details of the architecture and the implementation of our control software framework.
        Speaker: Dr Takeshi Nakatani (J-PARC)
        Paper
      • 43
        Karabo, the Control and Analysis System for the European XFEL
        The European XFEL is a 3.4 km long X-ray Free Electron Laser in its final construction and commissioning phase in Hamburg. It will produce spatially coherent X-rays in the energy range between 0.25 keV and 25 keV. The machine will deliver 10 trains/s, consisting of up to 2700 pulses/train at a 4.5 MHz repetition rate. In 2015 a first electron beam was produced in the RF photo-injector, and the commissioning of consecutive sections is ongoing. A huge number and variety of devices for the accelerator, beamlines, experiments, cryogenic and facility systems need to be controlled together. Data acquisition requires a precise timing and synchronisation system. Fast feedback from front-ends, the DAQs and the online analysis system must be seamlessly integrated and provided for the accelerator and the initial 6 experimental stations. An overview of the XFEL control system, Karabo, is presented.
        Speaker: Dr Sandor Brockhauser (EuXFEL)
      • 44
        Latest results and features with McXtrace 1.3
        [McXtrace][1] [1,2] is a Monte Carlo ray-tracing package for performing simulations of any kind of X-ray optical instrumentation or scattering experiment. We present the latest results obtained using the new release, McXtrace version 1.3. Some highlights of simulations using McXtrace include: 1. McXtrace in space - simulations of the X-ray telescope satellite ATHENA [3]. McXtrace is being adopted as the general tool for simulating X-ray optics at the European Space Agency. 2. Ray tracing integrated with the SimEx platform [4] for XFEL simulations. The SimEx platform is a general platform for performing source-to-end simulations at X-ray Free Electron Laser facilities, supported by the EU under the EUCALL initiative. We show that McXtrace can interoperate with the SimEx platform, creating a versatile tool with which to explore the new possibilities created by the FEL X-ray sources. 3. A full beamline description of the DanMAX beamline at the MAX IV synchrotron. McXtrace is used to simulate the DanMAX beamline while it is being designed, not only supporting design choices, but in parallel also building a virtual facility. Exciting new features in the latest release of McXtrace include: 1. New device model examples: a new polyphase/polycrystal sample model, and a mirror with heat bump. 2. Synchrotron source models / interfaces to other source codes. 3. A generalized reflectivity library. [1] J. Applied Crystallography, 2013. [2] http://www.mcxtrace.org [3] http://sci.esa.int/cosmic-vision/54517-athena/ [4] https://www.eucall.eu/
        Speaker: Dr Erik Knudsen (DTU Physics)
        Poster
      • 45
        Maximizing both user autonomy and usability in SpinWaveGenie
        Software applications make choices that balance offering simplicity for novice users with maintaining sufficient flexibility for advanced use cases. SpinWaveGenie, a library of reusable building blocks for performing linear spin wave calculations, derives its customizability from having users write small programs directly calling these classes. In this poster, I will discuss how we have lowered technical barriers that might discourage new users. SpinWaveGenie exports a CMake target, reducing the build script for user code to just a few lines. The recent addition of python bindings improves usability by further simplifying the edit, build, debug cycle. User autonomy is similarly enhanced by integrating SpinWaveGenie with a large ecosystem of open-source scientific software. This research used resources at the High Flux Isotope Reactor and Spallation Neutron Source, a DOE Office of Science User Facility operated by the Oak Ridge National Laboratory.
        Speaker: Dr Steven Hahn (Oak Ridge National Laboratory)
      • 46
        New developments in the McStas neutron Monte Carlo ray-tracing package
        The [McStas][1] neutron ray-tracing simulation package is a versatile tool for producing accurate simulations of neutron scattering instruments at reactors and at short- and long-pulsed spallation sources such as the European Spallation Source. It is extensively used for design and optimization of instruments, virtual experiments, data analysis and user training. McStas was founded as a scientific, open-source collaborative code in 1997. This contribution presents the project in its current state and gives an overview of lessons learned in the areas of design process, development strategies, user contributions, quality assurance, documentation, interoperability and synergies with the McXtrace project. Further, the main new developments in McStas 2.3 (April 2016) and McStas 3.0 (expected early 2017) are discussed, including many new components, updated source brilliance descriptions, new tools and user interfaces, web interfaces and a new interoperability with MCNP and other high-energy oriented Monte Carlo codes via the [MCPL][2] format. *This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 654000.* [1]: http://www.mcstas.org [2]: http://mcpl.mccode.org
        Speaker: Mr Peter Willendrup (DTU Physics)
        Slides
      • 47
        On-Axis-View: a GUI library to enhance the sample environment control.
        On-Axis-View (OAV) is a Python library designed for developing applications to visualize, monitor and control the sample environment of a general experimental end-station, and it is being used at several beamlines at the ALBA Synchrotron. Many X-ray experiments performed at a synchrotron facility (such as Protein Crystallography, Non-Crystalline Diffraction or Powder Diffraction experiments) require precise control of the sample position and orientation (centering procedures), proper positioning of the X-ray beam and its morphological analysis, as well as a friendly integration of all the different optical and motorized instruments into the main control interface. The OAV library provides such functionality, enhancing the development of customized graphical user interfaces according to the specific requirements of the final users and also providing a better user experience. In addition, the usage of the library greatly reduces the time required to develop new solutions and eases their maintenance. The OAV library is based on Taurus (Qt) and guiqwt, making the addition of available tools quite straightforward, and the implementation of the current control layer is based on Sardana, which is part of our control system at the ALBA Synchrotron.
        Speaker: Guifré Cuní (CELLS-ALBA)
        Paper
      • 48
        Prototype of real-time data analysis at the European Spallation Source
        Extracting scientific results from neutron experiments requires a challenging multiple-step treatment of the data, starting from the reduction and followed by the analysis. In order to simplify some of the repetitive procedures, automated and real time data reduction based on Mantid [1] has been tested at the Spallation Neutron Source and ISIS [2]. The second stage, the data analysis, can also be run “live” during the experiment in a simplified way to provide visual and numerical feedback on the quality of the acquired data. Thus, the real-time analysis enables the user to tune the running experiment either manually or automatically via feedback to the control system.
        The European Spallation Source ERIC (ESS) will generate a very intense long pulse to feed state-of-the-art instruments. In order to fully harness the potential of this powerful neutron source and ensure that a user can leave the facility with useful results, the ESS aims to give users the option to analyze data live during a running experiment for some of its high-throughput instruments.
        Here we report on the proof-of-concept for automated analysis of powder diffraction data and also small angle neutron scattering data, based on FullProf [3] and SasView [4], respectively. These results are part of an automated data processing workflow, i.e. collecting, reducing, analyzing, developed at the Data Management and Software Centre of the ESS [5]. A schematic sketch of such a feedback loop is shown after the references below.
        [1] O. Arnold, et al., Nuclear Instruments and Methods in Physics Research Section A, 2014, 764, 156
        [2] Shipman, G. et al., Accelerating data acquisition, reduction and analysis at the Spallation Neutron Source, e-Science (e-Science), 2014 IEEE 10th International Conference on 2014, 1, 223
        [3] J. Rodriguez-Carvajal, Physica B, 1993, 192, 55
        [4] SasView, http://www.sasview.org/
        [5] https://europeanspallationsource.se/data-management-and-software
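        The sketch below is schematic and hypothetical (plain Python; the directory, file pattern, analysis command and quality threshold are all invented) and only illustrates the shape of such a collect-reduce-analyse feedback loop:

        ```python
        import subprocess
        import time
        from pathlib import Path

        REDUCED_DIR = Path("/data/reduced")         # hypothetical drop directory for reduced data
        already_processed = set()

        def analyse(path):
            # Stand-in for launching an analysis job (e.g. a refinement or a model fit)
            # on the reduced data set and returning a goodness-of-fit figure.
            subprocess.run(["echo", f"analysing {path.name}"], check=True)
            return 1.2                               # pretend goodness-of-fit value

        while True:
            for reduced in sorted(REDUCED_DIR.glob("*.dat")):
                if reduced in already_processed:
                    continue
                already_processed.add(reduced)
                quality = analyse(reduced)
                if quality > 2.0:                    # invented acceptance threshold
                    print(f"{reduced.name}: poor fit ({quality}); flag to user/control system")
                else:
                    print(f"{reduced.name}: fit OK ({quality})")
            time.sleep(10)                           # poll for newly reduced files
        ```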
        Speaker: Céline Durniak (European Spallation Source ERIC)
      • 49
        Python useful features for programming experiment control systems
        The Python programming language [1] is popular, in particular for instrument control systems, both as the base programming language [2, 3] and as a language for programming graphical user interfaces (GUIs) [4]. In the Sonix+ software complex [5], Python is successfully used for both purposes. The presentation will be devoted to some language features that are particularly useful for programming instrument control software. References 1. [Python] http://www.python.org 2. [BLISS] http://www.esrf.fr/computing/bliss/ 3. [NICOS] http://www.frm2.tu-muenchen.de/software/nicos/index_en.html 4. [SARDANA] http://www.tango-controls.org/Documents/tools/Sardan 5. [Sonix+] http://sonix.jinr.ru/wiki/doku.php?id=en:index
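        As a generic illustration of the kind of feature meant here (plain Python, not taken from Sonix+): a context manager guarantees that a device is returned to a safe state even if a measurement fails part-way through.

        ```python
        from contextlib import contextmanager

        class DummyShutter:
            """Stand-in for a real device driver."""
            def open(self):
                print("shutter open")
            def close(self):
                print("shutter closed")

        @contextmanager
        def opened(shutter):
            shutter.open()
            try:
                yield shutter
            finally:
                shutter.close()          # runs even if the measurement raises an exception

        with opened(DummyShutter()):
            print("counting ...")        # measurement code goes here
        ```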
        Speaker: Dr Andrey Kirilov (FLNP of JINR)
        Poster
      • 50
        Recent Progress in the Development of MLF EXP-DB in J-PARC/MLF
        The MLF Experimental Database (MLF EXP-DB) is a core system aimed at delivering advanced services for data management and data access in J-PARC/MLF [1]. The main role of this system is to safely and efficiently manage the huge amount of data created on the neutron instruments, and to provide effective data access for facility users, promoting the creation of scientific results. The system is a web-based integrated database system with a three-tiered architecture. Collecting experimental data and associated information such as proposals, principal investigators and samples from instruments and other business database systems, it creates an experimental data catalog, and provides facility staff and users with web portals for browsing the catalog. Recently we have redesigned the MLF EXP-DB to enhance its availability and scalability towards full-scale operation of the system and the facility in the future. In addition, we have further developed the web portal to improve its usability.
        **High availability** In order to enhance availability, we have turned the system into a redundant distributed system consisting of a pair of servers in a switch-over relationship with database replication. This makes it possible to avoid service outages due to system failures and maintenance, such as the application of security patches, as much as possible.
        **Scaling-out** We have scaled out the system to improve scalability, especially for load balancing of the data cataloging, which depends on the data production rate of the instruments. The data production rate is increasing with the enhancement of beam intensity according to the development plan for the accelerators and neutron sources. Since the maximum data production rate for the whole facility at full beam intensity is expected to reach 5 TB/day, this is a critical issue for the sustainability of service provision.
        **Web portal** We have developed a unified web interface for the redundant distributed system and improved features such as data retrieval on the web portal. It is possible to search data across the distributed systems with flexible search conditions covering various measurement conditions.
        In this contribution, we will report the detailed architecture and the results of the performance evaluation for these recent developments, and present the operating status of the MLF EXP-DB.
        **References** 1. K. Moriyama and T. Nakatani, “A Data Management Infrastructure for Neutron Scattering Experiments in J-PARC/MLF”, Proceedings of ICALEPCS2015, Melbourne, Australia, in press.
        Speaker: Mr Kentaro Moriyama (Comprehensive Research Organization for Science and Society)
        Paper
      • 51
        RSMap3D: Reciprocal-Space Mapping Software
        The APS continues to develop the *RSMap3D* software package, a general-purpose tool for reciprocal-space mapping. The tool allows users to examine the volume of collected data and select portions on which to apply transformations that convert detector pixel locations from diffractometer geometry to reciprocal-space units, and then map pixel data on to a 3D reciprocal-space grid. *RSMap3D* can map data acquired using 4- and 6-circle diffractometers, and with scans taken over angles or energy. The application presents a graphical interface for selecting the relevant parts of data to process via a 3D representation of the acquired data volume. Scan angle or energy data is usually read from data files generated by *spec*, while image data is often read from TIFF or HDF5 files. The core mapping routines utilize the *xrayutilities* package, which uses the OpenMP programming API to parallelize operations across multiple cores on a workstation for increased performance. Data too big to fit entirely into memory at one time is processed in smaller chunks and reassembled to form the final output volume, allowing users to process arbitrarily large input datasets. Once data is processed it may be used as input to further analysis workflows. Additionally, visualization is often an important part of the data analysis process. Data generated by *RSMap3D* is easily read by *ParaView*, an open source, high-performance tool for 3D data visualization and manipulation. *ParaView* allows the user to easily produce 3D contour plots, and make slices through the data using plane cuts or cuts on the surface of a defined sphere, for constant |qx, qy, qz| cuts, for example. Using *RSMap3D* in combination with sophisticated visualization tools enables APS staff and users to study large diffraction data quickly and effectively. *RSMap3D* is written in Python and relies heavily on the *xrayutilities*, *spec2nexus*, and *VTK* libraries. It is easily installed using the *pip* package management system, and runs on the Linux, OS X, and Windows platforms. *RSMap3D* is currently in regular use at the APS for time-resolved diffraction work at beamline 7-ID, for WA-XPCS analysis at beamline 8-ID, for data exploration with inelastic X-ray measurements at beamline 30-ID, for scattering and diffraction experiments at beamlines 33-BM and 33-ID, and for micro-diffraction analysis at beamline 34-ID. *Work supported by U.S. Department of Energy, Office of Science, under Contract No. DE-AC02-06CH11357.
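        The core gridding step can be illustrated with a much simplified sketch (plain NumPy, not the RSMap3D or xrayutilities code; the pixel q-values and intensities are random stand-ins):

        ```python
        import numpy as np

        rng = np.random.default_rng(0)
        q = rng.uniform(-1.0, 1.0, size=(100_000, 3))               # (qx, qy, qz) per detector pixel
        intensity = rng.poisson(5.0, size=q.shape[0]).astype(float)

        bins = (64, 64, 64)
        counts, edges = np.histogramdd(q, bins=bins)                 # pixels contributing to each voxel
        summed, _ = np.histogramdd(q, bins=bins, weights=intensity)  # summed intensity per voxel
        with np.errstate(invalid="ignore"):
            volume = summed / counts                                 # mean intensity; NaN where empty
        # 'volume' could then be written out for visualisation, e.g. in ParaView.
        ```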
        Speaker: Mr Nicholas Schwarz (Argonne National Laboratory)
      • 52
        Sample positioning on a diffraction beamline using artificial neural networks
        Time-efficient and accurate sample positioning with reference to the neutron gauge volume are important considerations on diffraction-based strain scanning instruments. Traditionally, sample alignment to the neutron gauge volume is performed by beam entry or wall scans and fitting of the diffracted neutron intensity values (as a function of relative position) to an analytical solution. Using this approach, the sample surface position can be determined to very high precision [1]. In general, depending on the scattering power of the sample, entry scans may become tedious and time-consuming, inherently reducing the effective utilisation of the available beam time. While this approach works well for samples that have simple geometrical shapes, it becomes increasingly difficult to apply to samples with irregular shapes, since this requires multiple entry scans. James et al. devised an advanced sample alignment approach where a computer model of the sample is associated with the real sample in 3D space using fiducial markers and a coordinate measuring machine (CMM) [2]. This presentation will describe an alternative low-cost approach to sample alignment that uses pre-characterized multi-material fiducial markers and artificial neural network technology, eliminating the need for an expensive CMM. **References** [1] Brand, P.C. (1994). New Methods for the Alignment of Instrumentation for Residual-Stress Measurements by means of Neutron Diffraction, J. Appl. Cryst. 27, 164-176. [2] James, J.A., Santisteban, J.R., Edwards, L., Daymond, M.R. (2004). A virtual laboratory for neutron and synchrotron strain scanning. Physica B. 350, e743-e746
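        A generic, synthetic illustration of the regression idea (scikit-learn; the marker geometry, noise level and network size are invented and bear no relation to the actual instrument):

        ```python
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        sample_positions = rng.uniform(-10.0, 10.0, size=(500, 3))        # true positions (mm)
        # Three noisy fiducial-marker readings per position; real readings would live in
        # the instrument coordinate frame rather than being simple noisy copies.
        marker_readings = np.hstack(
            [sample_positions + rng.normal(0.0, 0.05, sample_positions.shape) for _ in range(3)]
        )

        model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
        model.fit(marker_readings[:400], sample_positions[:400])
        error = np.abs(model.predict(marker_readings[400:]) - sample_positions[400:]).mean()
        print(f"mean absolute positioning error: {error:.3f} mm")
        ```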
        Speaker: Mr Deon Marais (Necsa SOC Limited)
      • 53
        SANS Data Reduction Redesign
        The Mantid Project’s software framework provides general support for visualization and data reduction of neutron scattering and muon spin measurements [1]. It allows users to implement their own custom analysis algorithms and reduction routines. For several scientific areas, such as Small Angle Neutron Scattering (SANS), simple and efficient custom interfaces with tailored data reduction frameworks have been provided to allow users to analyse their data. The initial version of the reduction interface for SANS instruments at the ISIS facility was created nine years ago and was the first of the custom technique-specific interfaces. It has provided a successful solution and has been in active use ever since. However, increased data volumes and demanding feature upgrades have revealed the limitations of the current approach. Instrument scientists have asked us to deliver a new data reduction framework which is more scalable and robust, and which solves the performance issues of the current approach. In addition, they require a solution which allows other facilities to incorporate their custom data reduction easily into the same framework and interface. We have proposed a novel solution for the ISIS SANS reduction interface which makes use of a modular and general approach based on Mantid’s workflow algorithms, coupled with a Model-View-Presenter-based user interface. This approach allows other facilities to reuse and integrate easily into the existing infrastructure, and enables automated user-interface-level testing, therefore reducing future development and maintenance cost and effort. We will contrast the limitations of the current approach with the major improvements that our novel approach will deliver to the SANS instrument scientists, and report on the progress that has been achieved so far. **References:** [1] www.mantidproject.org
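        A schematic Model-View-Presenter split in plain Python (class names are invented; this is not the actual ISIS SANS code) shows why the presenter logic can be exercised in automated tests with a fake view instead of a Qt widget:

        ```python
        class ReductionModel:
            def reduce(self, run_number):
                return f"reduced({run_number})"      # placeholder for the real workflow algorithms

        class ReductionPresenter:
            def __init__(self, view, model):
                self.view, self.model = view, model
            def on_reduce_clicked(self):
                run = self.view.get_run_number()
                self.view.show_result(self.model.reduce(run))

        class FakeView:
            """Used in automated tests in place of the real Qt view."""
            def get_run_number(self):
                return 12345
            def show_result(self, text):
                self.result = text

        view = FakeView()
        ReductionPresenter(view, ReductionModel()).on_reduce_clicked()
        assert view.result == "reduced(12345)"
        ```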
        Speaker: Dr Anton Piccardo-Selg (Tessella, ISIS)
      • 54
        Simulating ideal mosaic and/or deformed single-crystal in arbitrary geometry in Geant4
        NCrystal is a Monte Carlo software package that we have developed for slow neutron interactions with single crystals and polycrystals. Together with the inelastic scattering models in NCrystal, the Bragg diffraction model provides a detailed microscopic description of neutron transport in crystals. In this presentation we show only the single-crystal capabilities of our newly developed NCrystal Geant4 extension. Bragg diffraction in crystals directly demonstrates the wave nature of the neutron. The physics of such processes in single crystals can be described by the Darwin equations. So far, the exact solution of these equations is known only for a slab-like geometry. Taking advantage of the powerful geometrical navigation capabilities of the open-source simulation toolkit Geant4, our detailed Bragg diffraction model can simulate neutron transport accurately in almost any geometry. We will compare our simulated results in a few cases with well-established analytical models.
        Speaker: Dr Xiao Xiao Cai (ESS/DTU)
      • 55
        The Design of Distributed data processing at CSNS
        A highly extendable distributed data processing suite for neutron experiments (DroNE) is being developed in C++, which will serve the online data manipulation and analysis of neutron scattering instruments. A loosely coupled distributed architecture is used to meet the multi-disciplinary research demands at the China Spallation Neutron Source (CSNS). The software provides flexible Python/Java APIs, and allows a wide variety of interfaces to run the experiment. In the implementation, the online data stream is received via DIM from the Data Acquisition System (DAQ), and the real-time process variables are logged via EPICS from the slow control system. Synchronously, a complete chain of reconstruction algorithms is triggered to decompose the time-of-flight (TOF) neutron events. A communication layer is also developed and open to all distributed components. The physical histograms and metadata related to a given run are kept in an in-memory data structure store with time-series durability. This allows direct access by the data reduction package, Mantid. In addition, the data visualization software is built in a client/server model, to satisfy the quick-response needs of online data manipulation.
        Speaker: Dr Rong Du (China Spallation Neutron Source, Institute of High Energy Physics)
      • 56
        The silx toolkit
        The European Synchrotron Radiation Facility (ESRF) has identified data analysis and management as a key priority and is going to devote 45 additional full-time-equivalent person-years of effort to that purpose in the 2016-2022 period. Among the expected deliverables, there is a set of tools to simplify the development and maintenance of scientific applications. The silx toolkit (www.silx.org) aims to achieve that goal. This contribution presents its current status. Well-known ESRF applications like FabIO, pyFAI and PyMca are expected to base their graphical user interfaces on this library in order to reduce and to share maintenance efforts.
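        As a small, hedged illustration of the kind of building block silx provides (assuming the silx.gui.plot module is available; the data values are arbitrary):

        ```python
        import numpy as np
        from silx.gui import qt
        from silx.gui.plot import Plot1D

        app = qt.QApplication([])
        x = np.linspace(0.0, 10.0, 500)

        plot = Plot1D()                                   # ready-made 1D plot widget
        plot.addCurve(x, np.sin(x) / (x + 1.0), legend="example curve")
        plot.show()
        app.exec_()
        ```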
        Speaker: Dr V. Armando SOLE (ESRF)
      • 57
        The unified software package Sonix+. Analysis of development experience
        The software package Sonix+ (Sonix) [1] was developed as unified control software for neutron instruments. It has been in operation since 1995. Currently it is used at the instruments of the IBR-2 reactor (FLNP, JINR), as well as at some instruments at other centres of the Russian Federation (about 20 installations in total). Though the main ideas of Sonix+ are largely similar to the decisions taken at other centres of the NOBUGS community, the consideration of the specific needs and traditions prevailing at the FLNP had a substantial influence on the structure of the complex and on some implementation decisions. This applies, for example, to the choice of operating system for the instrument control computer, to implementation details of the so-called “database” (a parameter store with fast access used for inter-module communication), to the graphical user interface, to the choice of the spectra recording format, etc. During its long period of operation the complex has evolved: new components have been added, and it has been adapted to new requirements. This has led, in particular, to the adjustment of a number of initial decisions and to a certain eclecticism. Some implemented features have remained unclaimed, while others, on the contrary, have required further development. This presentation is devoted to a detailed review of all these issues. [1] http://sonix.jinr.ru/wiki/doku.php?id=en:index
        Speaker: Dr Andrey Kirilov (FLNP of JINR)
        Poster
        Proceedings
      • 58
        The WebSonix service for remote instrument monitoring: current state and future plans
        The presentation is devoted to the WebSonix service [1], which has been used for remote instrument monitoring at the IBR-2 reactor at FLNP, together with the instrument control software Sonix+ [2], since 2013. Currently, the service is successfully used at 8 spectrometers. However, during operation several problems have been revealed that call for corrections. These concern structural changes, improved security, fault tolerance, etc. Possible methods for solving the identified problems are discussed. References [1] [WebSonix] Morkovnikov I.A., Kirilov A.S., "Upgrading WebSonix — remote instrument control system experiment on the IBR-2 reactor", report at the XXIV International Symposium on Nuclear Electronics & Computing (Varna, 9-16 September, 2013) [2] [Sonix+] http://sonix.jinr.ru/wiki/doku.php?id=en:index
        Speaker: Mr Ivan Morkovnikov (JINR)
        Poster
      • 59
        Update on neutron imaging functionality in Mantid
        Interest in neutron imaging in general and energy resolved neutron imaging in particular has been growing in recent years. Several imaging instruments are currently in different stages of planning, construction, commissioning and operation at pulsed neutron sources around the world, such as IMAT at ISIS (UK), ODIN at ESS (Nordic countries), RADEN at J-PARC (Japan), and VENUS at SNS (USA). IMAT (Imaging and Materials Science & Engineering) is undergoing commissioning in 2016 and provides neutron radiography (2D), neutron tomography (3D), energy resolved (fourth dimension) and energy-dispersive neutron imaging. IMAT offers unique time-of-flight diffraction techniques by capitalising on the latest image reconstruction procedures and event mode data collection schemes. These features impose several software requirements for data reduction and analysis that differ substantially from other neutron techniques. The [Mantid software project](http://www.mantidproject.org) provides an extensible framework that supports high-performance computing for data reduction, manipulation, analysis and visualisation of scientific data. It is primarily used for neutron and muon data at several facilities worldwide. Mantid includes several so-called custom interfaces specialized for different scientific areas. We give an update on recent developments in the Mantid software to provide better support for neutron imaging data and to support the commissioning of IMAT at ISIS. This involves new data structures, data processing components or algorithms, and user interfaces to satisfy the higher demands of energy-dependent neutron imaging in terms of data volume and complexity of analysis. A custom graphical user interface for imaging data and tomographic reconstruction is available in recent releases of Mantid. It integrates capabilities for pre- and post-processing, reconstruction, and 2D and 3D visualisation. This functionality can be used for instruments specifically designed for imaging experiments as well as other instruments using imaging detectors. Diverse tools for tomographic data reconstruction and analysis are being developed by different research groups and synchrotron and pulsed-source facilities. The imaging graphical interface of Mantid offers a common, harmonised interface to use an array of tools and methods that are currently being trialed, including for example third party tools for tomographic reconstruction such as [TomoPy](https://github.com/tomopy/tomopy), the [Astra Toolbox](https://github.com/astra-toolbox/astra-toolbox), [Savu](https://github.com/DiamondLightSource/Savu/), and [MuhRec](https://www.psi.ch/niag/muhrec).
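        A minimal sketch of calling one of the third-party reconstruction tools named above (TomoPy) directly; the array shapes, rotation centre and algorithm choice are illustrative only:

        ```python
        import numpy as np
        import tomopy

        # Stand-in projection data: (number of angles, detector rows, detector columns).
        projections = np.random.rand(180, 64, 64).astype(np.float32)
        theta = tomopy.angles(projections.shape[0])        # equally spaced projection angles

        recon = tomopy.recon(projections, theta, center=32.0, algorithm="gridrec")
        print(recon.shape)                                 # one reconstructed slice per detector row
        ```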
        Speaker: Dr Nicholas Draper (Tessella)
      • 60
        Upgrading RITA2 instrument with 0MQ streaming
        The RITA2 instrument is a triple axis spectrometer installed at the Swiss Spallation Neutron Source SINQ. Its electronics dates back to the 80’s and required a renovation. As a part of the upgrade program RITA2 has been equipped with 2nd generation electronics and data acquisition based on event streaming. In the framework of the BrightnESS project the DAQ software has been upgraded to make use of 0MQ as a transport layer and tested making use of event generators during SINQ shutdown. This procedure has proved to be a viable approach to develop solutions for the ESS.
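        A minimal sketch of the subscriber side of such a 0MQ event stream (pyzmq; the endpoint and packet format are hypothetical):

        ```python
        import zmq

        context = zmq.Context()
        socket = context.socket(zmq.SUB)
        socket.connect("tcp://localhost:5556")          # hypothetical streamer endpoint
        socket.setsockopt_string(zmq.SUBSCRIBE, "")     # subscribe to all messages

        while True:
            packet = socket.recv()                      # one packet of serialised detector events
            # Decode the events here and histogram or forward them to file writing.
            print(f"received {len(packet)} bytes")
        ```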
        Speaker: Dr Michele Brambilla (Paul Scherrer Institut)
      • 61
        Using Behavior Driven Development Tools for System Testing
        Using Behavior Driven Development (BDD) tools has the advantage of both documenting the working of the tested system and providing for an executable test. This contribution shows how this works and how we applied BDD to retrofit automatic testing to an existing control system (SICS) which had been developed when no one talked about test driven development yet.
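        A small, hypothetical example of the approach (a Gherkin scenario shown as a comment, with matching step definitions for the Python 'behave' package); the motor-driving scenario is invented and is not taken from the SICS test suite:

        ```python
        # Scenario: driving a motor to a position
        #   Given the instrument is connected
        #   When I drive motor "om" to 45.0
        #   Then motor "om" reports position 45.0
        from behave import given, when, then

        @given("the instrument is connected")
        def step_connect(context):
            context.motors = {}                          # stand-in for a real control-system client

        @when('I drive motor "{name}" to {target}')
        def step_drive(context, name, target):
            context.motors[name] = float(target)         # a real step would send a command and wait

        @then('motor "{name}" reports position {expected}')
        def step_check(context, name, expected):
            assert abs(context.motors[name] - float(expected)) < 1e-6
        ```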
        Speaker: Dr Mark Koennecke (Paul Scherrer Institute)
      • 62
        Using Docker to Provide Consistent Environments in Development, Testing and Production
        A significant problem affecting software developers is the difference in operating system, runtime dependencies and external services that can occur between local development environments and the environments in which software is actually deployed. This has consequences in terms of the increased software complexity required to account for these differences, and/or the effort necessary to maintain infrastructure and configuration consistency across multiple machines. Many types of tools currently exist to help tackle this problem, including configuration managers, package managers, language-specific version managers, and various hardware virtualization methods. However, these are not without their drawbacks; they are often limited in scope, can be language or OS-specific, are sometimes overly complex, and may have a large impact on resources and performance. Docker is a relatively new tool with a different approach. It uses OS-level virtualization to provide application “containers” — complete filesystems that contain everything needed to run code. The containers are *isolated*, giving developers freedom to use whatever technologies they want and to pull in any dependencies they need without worrying about conflicts with other applications. The containers are *lightweight*, so they have a very low overhead and can be spun up and ready to use in fractions of a second. The containers are also *portable* — applications can be packaged with their configuration and can be guaranteed to work on other machines running Linux, MacOS or Windows. In this talk I will present a brief overview of Docker along with two current use cases for Docker at ORNL — the provisioning of ephemeral testing environments, and automating the deployment of a complex web application.
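        A brief sketch of one of these use cases with the Docker SDK for Python (the image, command and the idea of running a check inside the container are illustrative):

        ```python
        import docker

        client = docker.from_env()                       # connects to the local Docker daemon
        output = client.containers.run(
            "python:3.6",                                # pinned runtime matching production
            ["python", "-c", "import sys; print(sys.version)"],
            remove=True,                                 # the ephemeral container is deleted on exit
        )
        print(output.decode())
        ```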
        Speaker: Mr Peter Parker (ORNL)
      • 63
        Utsusemi and software applications for the utilization of event-recording data at MLF, J-PARC
        We report the current status of software development for “Utsusemi” and its applications for the effective utilization of event-recorded data at the Materials and Life Science Experimental Facility (MLF) of the Japan Proton Accelerator Research Complex (J-PARC). “Utsusemi” is one of the analysis software frameworks used for data reduction and visualization at MLF, including the treatment of event-recorded data. MLF adopted the event-recording method in its data acquisition system (DAQ), which records the time and position information of neutron detections. In addition, the timing of common electric signals can be recorded as events at the same time by using the TrigNET module in the DAQ. This module connects to instrument devices and gathers their output signals, recording them with the same time stamps as the neutron detection events. Treating these event data, Utsusemi can pull out the individual data for each sample condition to analyze from a single measurement. This method has already been used by many MLF users. For example, we successfully carried out some users’ measurements and analyzed the data under pulse-shaped magnetic fields and alternating electric fields. In addition, we have also realized a “pseudo-online” analysis using event-recorded data. In this presentation, we will show the current status of our work.
        Speaker: Dr Yasuhiro Inamura (Japan Atomic Energy Agency, J-PARC)
      • 64
        Virtual Instrument Redesign
        The separation of collected data from the geometry of the instrument setup has proven to be a core strength of the Mantid Framework [1], a data reduction application used extensively at time-of-flight neutron sources around the world. The virtual instrument that Mantid supports allows many geometric operations to be performed without any knowledge or assumptions of the actual instrument geometry, including distance and angle measurement, solid angle calculations, nearest-neighbour networking and ray tracing. At a higher level these operations are essential for many of the important algorithms that Mantid offers, including unit conversion, calibration and many data corrections. While the concept has been very successful, continual advancement in the complexity of experiments, as well as increases in flux and pixelation of modern beamlines, provides large challenges for future design. Recent profiling work reveals that the current virtual instrument, while very flexible, is far from optimal in many of the common situations in which we now use it. Since the European Spallation Source has selected Mantid to be its platform of choice for data reduction, there is extra incentive to make the framework increasingly capable of handling tomorrow's neutron scattering beamlines. The Mantid team is currently in the process of ratifying a novel and fundamentally different design for the virtual instrument, which offers much improved performance without sacrificing flexibility. We explore the requirements that have led us towards this new solution, as well as providing a detailed look at the solution itself. Replacing such a core component of the framework with the new virtual instrument will represent a major engineering challenge; we will present our plans and progress on tackling this challenging upgrade task. References [1] www.mantidproject.org
        Speaker: Mr Owen Arnold (ISIS)
      • 65
        Virtual Research Management Plans at ELI Sites
        The management of a research institution is a complex task. The challenges are even greater when multiple independently developing infrastructures have to be merged into a common European research centre. This is what ELITRANS aims at: the goal of this Horizon 2020 project is to merge the three currently developing laser infrastructures, namely ELI-ALPS in Hungary, ELI-Beamlines in the Czech Republic and ELI-NP in Romania, into ELI-ERIC, a pan-European research institution. The three main objectives of the project are to design the organisational structure of ELI-ERIC, set up the management and operational processes, and support the transition from the current development phase to the new structure. One important subset of these tasks is the design of the virtual research management environment (VRME), that is, to define the management and administration structures and identify the hardware and software needs, as well as further elements that are necessary to implement the research environment across the three pillars. The poster gives an insight into the conception of the planned research management environment, demonstrates how this model will support the steps of the research lifecycle and presents a roadmap for the implementation.
        Speakers: Lajos Schrettner (ELI-ALPS), Tamás Gaizer (ELI-ALPS)
      • 66
        VirtuES - Combining Computation with High Throughput Experiments
        The vibrational spectrometer (VISION) at the Spallation Neutron Source (SNS) of Oak Ridge National Laboratory (ORNL) is the world's first high-throughput inelastic neutron scattering instrument. With its high throughput, VISION is facing the challenge of timely data analysis and interpretation. The aim of VirtuES (Virtual Experiments in Spectroscopy) is to effectively meet this challenge by providing a dedicated computational cluster. We are leveraging ORNL's existing expertise in computational sciences by situating VirtuES within the CADES (Compute and Data Environment for Science) infrastructure. We use first-principles (ab initio) electronic structure methods to predict the dynamics of materials. When experimental results are used in conjunction with modeling data, the overall information content is more than the sum of its parts. We will present details of the computational resources that are available to VISION users, together with benchmarks, initial results and our plans for the future.
        Speaker: Dr Stuart Campbell (Oak Ridge National Laboratory)
      • 67
        WebGL for MX - real and reciprocal space density in a web interface.
        Only 5 years after the first WebGL-based molecular graphics program, [GLmol](http://webglmol.osdn.jp/index-en.html), was created, this space is quite crowded, with more than 10 open-source WebGL macromolecular viewers developed across the world. Despite this, we felt that none of them could present electron density well enough to replace desktop programs, so to fill this gap we started to work on a viewer focused on the needs of crystallographers. We will discuss the design of our viewer [UglyMol](https://uglymol.github.io), as well as trade-offs between the map accuracy and the download size, and between the quality and the efficiency of rendering. UglyMol was originally designed to present models and electron density from our ligand screening pipeline [Dimple](http://ccp4.github.io/dimple/). It was then re-purposed to also show reciprocal-space density as reconstructed from diffraction images.
        Speaker: Mr Marcin Wojdyr (CCP4 & Diamond Light Source)
    • Keynote Tuesday Marble Hall

      Marble Hall

      Copenhagen University

      Thorvaldsensvej 40
      • 68
        Seven Secrets of Maintainable Codebases
        In this keynote you'll learn novel techniques that help you make sense of large codebases. You'll learn to identify the code that really matters for your ability to maintain a codebase, how to prioritize improvements and even evaluate your architecture based on how you actually work with the code. We'll also cover the people side of programming as you learn to mine social information such as communication paths, developer knowledge and hotspots. All techniques are based on software evolution. They use data from the most underused informational source that we have in our industry: our version-control system. Each point is illustrated with a case study from a well-known codebase like Roslyn, ASP.NET MVC, Scala or Clojure. This is a new perspective on software development that will change how you work with large systems. Come join the hunt for better code! Adam is a programmer that combines degrees in engineering and psychology. He’s the founder of Empear where he designs tools for software analysis. He's also the author of Your Code as a Crime Scene, has written the popular Lisp for the Web and self-published a book on Patterns in C. His other interests include modern history, music and martial arts.
        Speaker: Mr Adam Tornhill (Empear AB)
      • 69
        Publishing is not enough
        Scientific publications are and will remain the primary means of measuring scientific impact. However, the simple metrics of publications, citations and journal impact largely ignore societal impact and become increasingly inappropriate for facilities, application developers or primary scientific data [1]. Principles of Open Access are increasingly being extended to scientific data, requested by funding agencies, scientific communities and publishers alike. This demands not only establishing data management plans prior to scientific experiments, grant applications or publications, but also preparing scientific data in a form that permits data publication, sharing, re-use and mining. The Research Data Alliance (RDA) [2] establishes the fundamentals of global, interoperable scientific data sharing. The Photon and Neutron research communities are taking part in this endeavor [3,4] to ensure that the rather special demands of these scientific communities are taken into account when shaping global data services. This talk will present the essentials researchers need to know about the RDA's activities and the impact of the open access movement on research at Photon and Neutron facilities. [1] The Metric Tide: http://www.hefce.ac.uk/pubs/rereports/Year/2015/metrictide/Title,104463,en.html [2] Research Data Alliance (RDA): https://rd-alliance.org [3] The Interest group of the Photon and Neutron Science communities (PaNSIG): https://rd-alliance.org/groups/research-data-needs-photon-and-neutron-science-community.html [4] A. Boehlein et al., The Research Data Alliance Photon and Neutron Science Interest Group, Synchrotron Radiation News 28(2) 2015, DOI:10.1080/08940886.2015.1013421
        Speaker: Dr Thomas Proffen (Oak Ridge National Laboratory)
      • 70
        The ILL Joins the Mantid Project
        At the ILL, LAMP (Large Array Manipulation Program) [1] has been the primary package responsible for data reduction for over 20 years. LAMP is based on IDL, so it works across multiple platforms, and provides a graphical user interface and scripting capabilities for data treatment. It supports most of the instruments at the ILL and can handle data produced at the ILL since its creation. In order to facilitate ease of use and the standardisation of neutron data reduction software across many facilities worldwide, a decision has been made to phase out LAMP and bring in Mantid [2] as the main tool for data treatment. The Mantid Framework is a cross-platform and easy-to-extend package providing both GUI and Python scripting capabilities for complex data manipulation (a short illustrative reduction script follows this entry). Mantid has been in use for some time at ISIS and the SNS, and the ESS will also use it for all data reduction. A number of advantages are foreseen in adopting Mantid, such as well-established distributed development practices and the presence of much of the data reduction functionality that exists in LAMP. In addition, it will also provide a more consistent user experience for users undertaking experiments at different facilities. Within the Mantid Project there are many examples of collaborative sub-projects, allowing several facilities to contribute to, and benefit from, combined effort. The current Mantid adoption project started in May 2016 at the ILL, and builds on previous work undertaken during a project to explore using Mantid in principle. The three-year project involves a team of three new developers, plus a developer with existing Mantid experience for one year, under the Computing for Science Group at the ILL. The objective is to provide a smooth transition from LAMP to Mantid by providing Mantid support for most of the instruments at the ILL by the end of the project. In this talk we will present the progress made thus far in using Mantid on time-of-flight spectroscopy (IN4, IN5 and IN6) and backscattering (IN16B) instruments at the ILL. We will discuss some of the differences in approach between LAMP and Mantid and the challenges faced along the way. We will also share the future plans for the project to address the other technique areas used at the ILL, and to support instruments with moving detectors at the ILL. [1] https://www.ill.eu/instruments-support/computing-for-science/cs-software/all-software/lamp/the-lamp-book/ [2] http://www.mantidproject.org/
        Speaker: Dr Ian Bush (Tessella / ILL)
        Slides
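        To give a flavour of the kind of script Mantid enables, below is a minimal sketch assuming the standard mantid.simpleapi interface; the file name, incident energy and binning parameters are purely illustrative and not an actual ILL reduction.
        ```python
        # Load a raw file, convert to energy transfer and rebin, then save.
        # File name and numeric values are placeholders for illustration only.
        from mantid.simpleapi import Load, ConvertUnits, Rebin, SaveNexus

        ws = Load(Filename="ILL_IN5_example.nxs")                # hypothetical raw file
        ws = ConvertUnits(InputWorkspace=ws, Target="DeltaE",
                          EMode="Direct", EFixed=3.27)           # meV, illustrative value
        ws = Rebin(InputWorkspace=ws, Params="-2.0,0.02,3.0")    # Emin, step, Emax in meV
        SaveNexus(InputWorkspace=ws, Filename="IN5_reduced.nxs")
        ```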
    • 10:10 AM
      Coffee Marble Hall

      Marble Hall

      Copenhagen University

      Thorvaldsensvej 40
    • Contributions 4 Marble Hall

      Marble Hall

      Copenhagen University

      Thorvaldsensvej 40
      • 71
        MXCuBE 3 web application, on the way to next generation experiment control
        The MXCuBE project started in 2005 at the ESRF [1], with the objective of providing crystallography beamline users with an easy-to-use software platform to run their experiments. The current MXCuBE is based on PyQt; however, the use of old PyQt libraries, as well as the lack of a clear separation between the application control logic and the graphical layer, makes the current version very difficult to maintain, upgrade and extend. This, together with the need for extended capabilities for the operation of the new crystallography beamlines (such as improved remote operation, multiple-user handling and cross-platform support), pushed the members of the MXCuBE collaboration to develop new software based on modern technologies: MXCuBE v3. Taking the requirements into account, a web-based application best suited these needs: remote operation and cross-platform support fit naturally into a web environment. Thanks to the single-page application paradigm, the final user experience can be improved rather than being merely a clone of the desktop application. Decoupling the application into client and server also improves code quality, since each element has a clear scope, and makes it easier to maintain than the previous version of MXCuBE. Taking advantage of this new technology, the user interface has been completely redesigned with the aim of enhancing the user experience. The main technologies in use are the python-flask web framework [2] for the backend and the React JavaScript library [3] for the frontend, enhanced with several third-party libraries for both components (a minimal backend sketch follows this entry). Low-level control is achieved via connections to Tango, Sardana and custom protocols, by means of the so-called "Hardware objects", which are self-contained pieces of software that link to the underlying instrumentation control. These libraries are being reused from the previous version of MXCuBE. Currently, MXCuBE v3 is under active development, mainly by MAX IV and the ESRF, and has already achieved one of the first milestones defined in 2015: the first data collection experiment was successfully performed in June 2016 during BioMAX commissioning. The coming months are devoted to increasing the stability of the application and adding new features that will be needed when user operation starts. MXCuBE v3 will not only serve as the experiment control environment at the BioMAX [4] beamline at MAX IV and other facilities in the near future, but will also serve as a technological breakthrough for future developments across the whole facility. References: [1] Gabadinho J. et al. MxCuBE: a synchrotron beamline control environment customized for macromolecular crystallography experiments. J Synchrotron Radiat. 2010. [2] Python Flask microwebframework, http://flask.pocoo.org [3] React: a Javascript library for building user interfaces. https://facebook.github.io/react [4] Thunnissen M. et al. The macromolecular crystallography beamlines BioMAX and MicroMAX at the MAX IV laboratory. Acta Crystallographica Section A: Foundations and Advances. 2015.
        Speaker: Dr Mikel Eguiraun (Maxlab)
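        As a minimal illustration of the backend technology named above, the following sketch shows a python-flask endpoint serving JSON to a single-page frontend; the route, payload and port are hypothetical and not part of MXCuBE v3's actual API.
        ```python
        # Minimal Flask backend exposing a JSON endpoint that a React
        # single-page application could poll; all names are illustrative.
        from flask import Flask, jsonify

        app = Flask(__name__)

        # In a real application this state would come from the "Hardware
        # objects" layer talking to the beamline control system.
        BEAMLINE_STATE = {"energy_keV": 12.7, "transmission": 0.1, "shutter": "closed"}

        @app.route("/api/beamline/state")
        def beamline_state():
            """Return the current (mock) beamline state as JSON."""
            return jsonify(BEAMLINE_STATE)

        if __name__ == "__main__":
            app.run(port=8090, debug=True)
        ```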
      • 72
        SwissFEL Beam Synchronous Data Acquisition - A Sneak peek under the hood
        A new data acquisition system is being developed and commissioned for the upcoming FEL at PSI. This system is based on several novel concepts and technologies, and it targets immediate data availability and online processing. The system is capable of assembling an overall data view of the whole machine thanks to its distributed and scalable buffering back-end. Load on data sources is reduced by streaming data immediately as it becomes available. The streaming technology used provides load balancing and fail-over by design. Data channels from various sources can be efficiently aggregated and combined into new data streams for immediate online monitoring, data analysis and processing. The system is dynamically configurable: various acquisition frequencies can be enabled and data can be kept for a defined time window. All data will be available and accessible, enabling advanced pattern detection and correlation during acquisition time. Accessing the data in a code-agnostic way will also be possible through the same REST API that is used by the web frontend (a brief illustrative sketch follows this entry). Furthermore, data can be automatically reduced, compressed and extracted for later studies and documentation.
        Speaker: Mr Simon Gregor Ebner (Paul Scherrer Institute)
        Slides
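        As an illustration of code-agnostic access through a REST API, the sketch below uses the Python requests library; the URL and query fields are placeholders, not the actual SwissFEL data API.
        ```python
        # Query a (hypothetical) REST data API and print the returned records.
        import requests

        query = {
            "channels": ["EXAMPLE-BEAM:ENERGY"],                 # placeholder channel name
            "range": {"startPulseId": 1000000, "endPulseId": 1000100},
        }
        response = requests.post("https://data-api.example.org/query",
                                 json=query, timeout=10)
        response.raise_for_status()
        for record in response.json():
            print(record)
        ```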
      • 73
        Integration of fast detectors into beamline controls at the GM/CA macromolecular crystallography beamlines at the Advanced Photon Source
        Fast detectors revolutionize many operations at macromolecular crystallography beamlines, from introducing shutterless data collection and on-the-fly rastering scans to new standards for beamline stability, detector-goniometer synchronization at 100 Hz data collection, and new ways to display data (e.g. plotting spot counts in addition to visual inspection of individual frames). Fast crystal screening and data collection with such detectors also create new requirements for the computing environment, including fast automatic distributed data processing and analysis pipelines on clusters, high-speed storage, transfer of large data volumes to remote institutions, and software to handle new data formats such as HDF5 (a short reading example follows this entry). The General Medical Sciences and Cancer Institutes Structural Biology Facility at the Advanced Photon Source (GM/CA @ APS) has operated the Pilatus3 6M detector at the 23ID-D beamline for 2.5 years and has just started operating the Eiger 16M at 23ID-B. In this presentation we report how we incorporated controls for these detectors into our EPICS-based JBluIce control system; how we collect, display and automatically process data on computing clusters at speeds up to 100 frames per second; how we re-arranged our network, distributed storage and computing clusters to accommodate greater data speeds and volumes; how we are optimizing our Science DMZ network to speed up remote data backups; and how we implement automatic beamline alignment and remote access for more efficient use of beamlines equipped with these new detectors.
        Speaker: Dr Sergey Stepanov (Advanced Photon Source, Argonne National Laboratory, Argonne, IL, USA)
        Slides
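        As a small illustration of handling the new HDF5 format from Python, the sketch below reads detector frames with h5py; the file name and dataset path are illustrative and depend on the detector and file-writer configuration.
        ```python
        # Open an HDF5 file and inspect one block of detector frames.
        import h5py

        with h5py.File("eiger_series_master.h5", "r") as f:
            frames = f["/entry/data/data_000001"]   # hypothetical dataset path
            print(frames.shape, frames.dtype)       # (n_frames, ny, nx)
            first = frames[0]                       # read a single frame lazily
            print("total counts in first frame:", int(first.sum()))
        ```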
      • 74
        IBEX - the new EPICS based Instrument Control System at the ISIS Pulsed Neutron and Muon Source
        Instrument control at ISIS is in the process of migrating from a mainly locally developed LabVIEW-based system to an EPICS-based one. The new control system, called IBEX, was initially used during the commissioning of two new instruments, but is now being used on production systems. We will cover the architecture and design of the new control system, our choices of technologies, the current status, and our plans for moving forward. (A minimal example of channel-level EPICS access from Python follows this entry.)
        Speaker: Dr Frederick Akeroyd (STFC)
        Slides
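        As a minimal illustration of channel-level access to an EPICS-based system from Python, the sketch below uses the pyepics package; the process variable names are hypothetical, not IBEX's actual PVs.
        ```python
        # Read and write EPICS process variables over Channel Access.
        from epics import caget, caput

        temperature = caget("IN:DEMO:TEMP:01")         # read a PV (placeholder name)
        caput("IN:DEMO:TEMP:01:SP", 300.0, wait=True)  # write a setpoint, wait for completion
        print("current temperature:", temperature)
        ```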
      • 75
        Programs and techniques based on ROOT package for acquisition and sorting of the list mode data of the neutron detectors
        The detector group of a neutron centre has to solve a variety of non-standard tasks on a daily basis. These tasks arise from the configuration and commissioning of detectors, the search for the causes of faults, or the development of new methodologies and the preparation of new types of experiments. The detector group's software requirements are therefore specific and varied, and user-oriented software does not cover the full range of these demands. The detector group's wish list at FLNP includes detailed diagnostic tools, autonomy, fast deployment, flexibility and, additionally, compact data formats, easy integration into the experiment control software (detachable or connecting components), and extensibility with respect to new equipment. These requirements are the same for different types of detectors. These demands were partially realised in previous software such as DeLiDAQ-1 [1], developed in collaboration with HZB, Berlin (the former HMI). Recently we have started to follow these requirements in our new developments. Especially important is the first item: obtaining full information about the measured parameters. The most complete information from the detector and about the operation of the readout circuit can be extracted from list-mode data, which also corresponds to the modern trend towards user-oriented experimental techniques. The most comprehensive software package for event-list data is ROOT from CERN, thanks to its excellently implemented concept of ROOT trees. Another reason to use this package is its compact data format. As a bonus, ROOT is an intensively used toolkit with a complete suite of C++ classes for fitting, linear algebra, visualization of large datasets, graphical user interfaces, etc. In this report I will explain the list-mode data solutions in the software I have implemented, some of which are already in use by the detector specialists while others are under development (a minimal PyROOT illustration follows this entry). The software discussed in the report is designed for position-sensitive detectors with delay-line readout and for data recorded by the gaseous proton-recoil telescope for fast neutron spectrometry [2]. [1] Nucl. Instr. and Meth. A 569 (2006) 900–904; see also Nucl. Instr. and Meth. A 572 (2007) 1004 [2] Physics of Particles and Nuclei Letters, 2012, Vol. 9, No. 6–7, pp. 508–516
        Speaker: Dr Elena Litvinenko (JINR)
        Proceedings
        Slides
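        The sketch below illustrates the ROOT tree concept from Python (PyROOT) by filling a small TTree with list-mode events; the branch names and values are illustrative, not the author's actual data format.
        ```python
        # Store a few list-mode events (position + time of flight) in a TTree.
        from array import array
        import ROOT

        f = ROOT.TFile("events.root", "RECREATE")
        tree = ROOT.TTree("events", "list-mode neutron events")

        x = array("i", [0]); y = array("i", [0]); tof = array("d", [0.0])
        tree.Branch("x", x, "x/I")
        tree.Branch("y", y, "y/I")
        tree.Branch("tof", tof, "tof/D")

        for event in [(10, 20, 1.5e3), (11, 22, 2.7e3)]:   # stand-in events
            x[0], y[0], tof[0] = event
            tree.Fill()

        tree.Write()
        f.Close()
        ```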
    • 12:20 PM
      Lunch Marble Hall

      Marble Hall

      Copenhagen University

      Thorvaldsensvej 40
    • Contributions 5 Marble Hall

      Marble Hall

      Copenhagen University

      Thorvaldsensvej 40
      • 76
        Data Analysis & Management System for China Spallation Neutron Source
        The data analysis & management system at the China Spallation Neutron Source (CSNS) adopts sophisticated designs and existing resources from various big-science facilities. The software for each beamline will be developed and deployed in a unified framework, from which both the facility and its users can benefit. Under this strategy, a distributed data-processing framework (DroNE) and a message-communication framework (NEON) for online and offline data analysis have been implemented. Interfaces to data acquisition (DAQ), experiment control (EPICS), data reduction (Mantid) and data visualization are plugged into the low-level framework flexibly. The design of the data management system pays particular attention to data sharing between the facility and supercomputing resources. A high-speed, highly reliable data delivery system has been developed, and a cloud analysis system has been built on the basis of OpenStack.
        Speaker: Dr Junrong Zhang (Institute of High Energy Physics, Chinese Academy of Sciences)
      • 77
        Web Data Analysis at the Spallation Neutron Source
        Neutron scattering is one of the most effective ways to obtain information on both the structure and the dynamics of condensed matter. The Spallation Neutron Source (SNS) is an accelerator-based neutron source at Oak Ridge National Laboratory (TN, USA). This facility provides the most intense pulsed neutron beams in the world for scientific research and industrial development. State-of-the-art experiment stations provide a variety of capabilities for researchers across a broad range of disciplines, such as physics, chemistry, materials science, and biology. With great power comes… big data. The SNS neutron detectors produce raw data sets of considerable size. In order to find meaningful information, those data need to be fully treated, i.e., reduced and analyzed. With the proliferation of alternative browsing devices and quasi-permanent Internet connectivity, the paradigm of data treatment is changing. Data treatment is no longer seen as a sequence of independent procedures (e.g. data collection, reduction and analysis) but as a single integrated workflow connecting all the parties involved and enabling a smooth flow of information between scientific applications and users. The SNS is supporting neutron science by providing a better, and unique, user experience to the hundreds of users who visit its facilities annually. A new web portal using responsive web design provides an optimal viewing and interaction experience across a wide range of devices. Coupling Bootstrap [1] with JavaScript, we present a clean and responsive web interface that is simple and elegant to navigate, high-performing, and easy to maintain and contribute to. The system uses multiple freely available plugins and libraries for the presentation layer (DataTables [2], Handsontable [3], etc.) and makes use of the Django Framework [4] for the backend. The experimental data are retrieved through the data catalogue system ICAT [5]. The scientific code routines are part of the Mantid Framework [6] and run separately on a high-performance cluster. We present the project's general architecture and the latest development on the data analysis and visualization layer for the Small Angle Neutron Scattering technique. This layer presents the reduction results to the user and allows data plotting and fitting. We are using the Plotly [7] Python offline module for visualization and MathJS [8] for curve fitting (a short plotting sketch follows this entry). This solution is cross-browser and cross-platform and provides a unique user experience. References: [1] http://getbootstrap.com/ [2] http://www.datatables.net/ [3] http://handsontable.com/ [4] http://www.djangoproject.com/ [5] http://pan-data.eu/ICAT [6] http://www.mantidproject.org/ [7] https://plot.ly/python/ [8] http://mathjs.org/
        Speaker: Dr Ricardo Ferraz Leal (SNS)
        Slides
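        As a small illustration of the visualization layer, the sketch below produces an offline Plotly plot from Python; the data values are placeholders, not real reduction output.
        ```python
        # Build a log-log I(Q) plot and write it to a standalone HTML file.
        import plotly.offline as po
        import plotly.graph_objs as go

        q = [0.01, 0.02, 0.05, 0.1]            # momentum transfer, illustrative
        i_q = [1200.0, 800.0, 150.0, 20.0]     # reduced SANS intensity, illustrative

        trace = go.Scatter(x=q, y=i_q, mode="lines+markers", name="I(Q)")
        layout = go.Layout(xaxis=dict(type="log", title="Q"),
                           yaxis=dict(type="log", title="I(Q)"))
        po.plot(go.Figure(data=[trace], layout=layout),
                filename="sans_reduction.html", auto_open=False)
        ```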
      • 78
        Advances in High-Performance Data Analysis & Data Management at the APS
        High-performance data analysis software and computing infrastructure are of particular importance for synchrotron facility productivity. Demands for increased computing at user facilities are driven by new scientific opportunities enabled by new measurement techniques, technological advances in detectors, multi-modal data utilization, and advances in data analysis algorithms. The proposed high-energy, MBA-based storage-ring upgrade to the APS will increase brightness and coherence, leading to further increases in data rates and experiment complexity, and creating yet further demands for the application of advanced scientific computational techniques. The APS and its parent institution, Argonne National Laboratory (ANL), are well poised to address these computational challenges. ANL is home to world-leading supercomputing infrastructure and computer science expertise. This colocation provides an unprecedented opportunity for collaboration. The APS has been bringing together expertise in high-performance data analysis and large-scale data management, and leveraging computational and storage resources in order to address the facility's current and upcoming challenges. At present, the APS collects approximately 2 PB of raw experimental data per year. This rate is quickly increasing for the aforementioned reasons. To address the need for larger-capacity storage in the near term, the APS has implemented storage solutions in cooperation with the Argonne Leadership Computing Facility. The Petrel data system is a 1.7 PB data store that has been serving the needs of the APS for the past year. As of the middle of 2016, the APS has brought another storage system, Extrepid, online, making an additional 1.7 PB of storage available for APS experiments. Each system is housed in a separate computing building on campus, and connected to the APS via individual dedicated 2 x 10 Gbps network links. Due to recent intra-campus network infrastructure upgrades, network bandwidth between the APS and these storage systems may be increased as needed. To best use these storage systems, software engineers and beamline staff at the APS continue to work closely with the Globus Services team to implement and deploy data management tools that integrate with beamline data workflows. These tools help automate the transfer of data between acquisition devices, computing resources, and data storage systems. Ownership and access permissions are maintained based on an experiment's user group. A metadata catalog allows beamline staff to populate experiment conditions and information for access via a web portal. User groups can download data at their home institutions using Globus Online. These data management tools are now deployed at ten APS beamlines (a short transfer-automation sketch follows this entry). More powerful computational resources and newly developed and applied high-performance computing software are being utilized in order to deliver analyzed data within shorter time frames. For example, staff and users at the APS regularly utilize a 128-node cluster equipped with 256 K80 GPUs for fast, parallel 3D volume rendering of large tomographic datasets during beam time. Scalable, virtualized computing clusters are tightly coupled with beamline workflows and are in regular use for most X-ray Photon Correlation Spectroscopy (XPCS) experiments at the facility's dedicated XPCS beamline. Elemental mapping, reciprocal-space mapping, and Bragg coherent diffraction imaging software is being ported to run on multi-threaded and distributed-memory computing resources.
        *Work supported by U.S. Department of Energy, Office of Science, under Contract No. DE-AC02-06CH11357.
        Speaker: Mr Nicholas Schwarz (Argonne National Laboratory)
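        As an illustration of the kind of transfer automation described above, the sketch below uses the Globus Python SDK (an assumption on our part; it is not the APS's actual code); the endpoint UUIDs, paths and token are placeholders.
        ```python
        # Submit a recursive directory transfer between two Globus endpoints.
        import globus_sdk

        TOKEN = "..."                                        # from a Globus auth flow
        SOURCE_EP = "00000000-0000-0000-0000-000000000001"   # beamline endpoint (placeholder)
        DEST_EP = "00000000-0000-0000-0000-000000000002"     # home institution (placeholder)

        tc = globus_sdk.TransferClient(
            authorizer=globus_sdk.AccessTokenAuthorizer(TOKEN))
        tdata = globus_sdk.TransferData(tc, SOURCE_EP, DEST_EP, label="raw data backup")
        tdata.add_item("/data/2016/run_0042/", "/backup/run_0042/", recursive=True)
        task = tc.submit_transfer(tdata)
        print("submitted transfer task:", task["task_id"])
        ```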
      • 79
        Data Analysis Infrastructure for Diamond Light Source Macromolecular & Chemical Crystallography
        A proposal for the future data analysis infrastructure at Diamond Light Source is presented. Built on a messaging framework, a variable number of distributed servers working in parallel replaces monolithic batch jobs running on a single node. This infrastructure is scalable, can be easily extended, and even allows heavily CPU-bound tasks, such as the processing of reduced macromolecular data, to be moved off-site, e.g. to external cloud providers. Diamond Light Source has 8 MX & CX beamlines with DECTRIS PILATUS detectors, each capable of producing diffraction data at rates between 25 and 100 images per second. Upgrades to new DECTRIS EIGER detectors are planned over the forthcoming year. These offer frame rates of 133-3,000 Hz, concomitant with increased image sizes, for compressed data rates of around 18 Gbit/s. The current automated data analysis process consists of two main aspects: a very fast and embarrassingly parallel per-image analysis for timely feedback during data collection, and more involved data reduction and processing designed to give answers to the experimental questions. The existing infrastructure depends on submitting batch jobs to a high-performance computing cluster. While appropriate for the current workload, this approach alone does not scale to the very high data rates anticipated in the near future. In particular, with live processing there are shortcomings in performance when the workload exceeds the capacity of one cluster node, and when data rates stay significantly below a node's capacity the cluster is not used efficiently. In the proposed infrastructure, fine-grained tasks are submitted as messages to a central queue. Servers, running on cluster nodes, consume these messages and process the tasks (a schematic sketch of this pattern follows this entry). Results can be written to a common file system, sent to another queue for further downstream processing, sent to a dynamic number of subscribing observers, or any combination of these. This will increase the availability of high-performance nodes and allow increased parallelisation of more computationally expensive tasks, thus increasing the overall efficiency of cluster usage. The resulting distributed infrastructure is resource-optimal, low-latency, fault-tolerant, and allows for highly dynamic data processing.
        Speaker: Dr Markus Gerstel (Diamond Light Source Ltd)
        Slides
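        The sketch below is a schematic, broker-agnostic illustration of the proposed pattern: fine-grained tasks on a central queue, consumed by parallel workers that publish results onward. It uses only the Python standard library and is not Diamond's actual messaging stack.
        ```python
        # Fine-grained per-image tasks are placed on a central queue; workers
        # consume them and push results to a downstream queue.
        import queue
        import threading

        task_queue = queue.Queue()     # stands in for the central message queue
        result_queue = queue.Queue()   # stands in for a downstream queue / subscribers

        def worker():
            while True:
                image_id = task_queue.get()
                if image_id is None:                 # sentinel: shut the worker down
                    break
                result_queue.put((image_id, {"spots": image_id % 7}))  # fake analysis
                task_queue.task_done()

        threads = [threading.Thread(target=worker) for _ in range(4)]
        for t in threads:
            t.start()
        for image_id in range(20):                   # submit per-image tasks
            task_queue.put(image_id)
        task_queue.join()
        for _ in threads:
            task_queue.put(None)
        for t in threads:
            t.join()
        print("processed", result_queue.qsize(), "images")
        ```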
      • 80
        Trust and Identity: an implementation of Moonshot and the vision of transparent interworking
        Diamond Light Source, the UK national synchrotron, is used by many thousands of researchers from the UK and many other countries each year. It works like a giant microscope, harnessing the power of electrons to produce bright light that can be used to study anything from fossils to jet engines to viruses and vaccines. Frequently the boundaries of the research extend across many other geographically distributed scientific facilities and resources, and optimising the process of communication and the sharing of results and data is pivotal to success. This talk will describe examples of our work in more detail and then go on to discuss the practical challenges to which this leads in terms of data volumes, acquisition, analysis and the computing resources needed. From the inception of Diamond we appreciated the value of providing a unique identity for each of our thousands of users, for security and ease of collaboration, together with the requirements to enable transparency and optimisation of processes. The remaining discussion will briefly describe the architecture in place and then continue with our design to extend this across multiple scientific facilities using common identity providers such as Umbrella and ORCID.
        Speaker: Dr Bill Pulford (Diamond Light Source)
      • 81
        Recent Developments in the ICAT Job Portal
        The ICAT Job Portal (IJP) allows users to create and submit jobs to HPC resources or compute farms, taking as inputs datasets and datafiles selected using ICAT catalogue searches, with jobs using the ICAT Data Service for data access. Jobs are specified via Job Types, where each Job Type defines: whether a job is interactive or can be run as a batch job; the dataset types for which a job may be run; whether a single job instance can be applied to multiple entities (datasets and/or datafiles); and other options and parameters to be passed to job instances. An IJP installation includes one or more *batch connectors*, which the IJP uses to submit jobs to batch processing systems. When the user selects multiple entities for a batch job, they are given the option of submitting multiple instances of the job (one per entity) to the batch system. The portal allows users to monitor the progress and status of ongoing and completed jobs. We describe the broad architecture of the IJP and cover recent developments, including work to integrate the IJP and TopCAT, and design modifications to meet specific requirements of the Octopus facility at the UK Science and Technology Facilities Council's Central Laser Facility.
        Speaker: Dr Brian Ritchie (RAL - STFC)
        Slides
    • 3:30 PM
      Coffee Marble Hall

      Marble Hall

      Copenhagen University

      Thorvaldsensvej 40
    • Contributions 6 Marble Hall

      Marble Hall

      Copenhagen University

      Thorvaldsensvej 40
      • 82
        DonkiOrchestra: a scalable system for data collection and experiment management based on ZeroMQ distributed messaging
        Synchrotron and Free Electron Laser beamlines consist of a complex network of devices. Such devices can be sensors, detectors and motors, but also computational resources. The setup is not static and is often upgraded. Data acquisition systems are constantly challenged by such continuous changes and upgrades, so a constant evolution of software technologies is necessary. DonkiOrchestra is a TANGO-based framework developed at Elettra Sincrotrone Trieste that takes full advantage of the ZeroMQ distributed messaging system and supports both data acquisition and experiment control. In the DonkiOrchestra approach, a TANGO device referred to as the Director provides the logical organization of the experiment as a sequential workflow relying on triggers. Each software trigger activates a set of Actors that can be hierarchically organized according to different priority levels. This allows for concurrency and map-reduce strategies. Data acquired by the Actors are tagged with the trigger number and sent back to the Director, which stores them in suitably structured HDF5 archives. The intrinsic asynchronicity of ZeroMQ maximizes the opportunity to perform parallel operations and sensor readouts (a minimal messaging sketch follows this entry). This paper describes the software architecture behind DonkiOrchestra, which is fully configurable and scalable, so it can be reused on multiple endstations and facilities. Furthermore, experimental applications, performance results and future developments are presented and discussed.
        Speaker: Mr Roberto Borghes (Elettra Sincrotrone Trieste)
        Paper
        Slides
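        The sketch below illustrates the trigger-style publish/subscribe messaging that ZeroMQ provides from Python (pyzmq); the port, message format and roles are simplified stand-ins, not DonkiOrchestra's actual wire protocol.
        ```python
        # A "Director"-like publisher sends numbered triggers; an "Actor"-like
        # subscriber would acquire data when one arrives. In a real deployment
        # the subscriber runs in a separate process and connects first.
        import zmq

        ctx = zmq.Context()

        pub = ctx.socket(zmq.PUB)
        pub.bind("tcp://*:5556")
        pub.send_json({"trigger": 42, "command": "acquire"})

        sub = ctx.socket(zmq.SUB)
        sub.connect("tcp://localhost:5556")
        sub.setsockopt_string(zmq.SUBSCRIBE, "")
        # msg = sub.recv_json()   # would block until a trigger arrives
        ```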
      • 83
        Multi Sample Workflow
        Gumtree is a software product developed at ANSTO and used for experimental control as well as data visualization and treatment. In order to simplify the interaction with instruments and optimize the available time for users, a user-friendly multi-sample workflow has been developed for Gumtree. Within this workflow users follow a step-by-step guide where they list available samples, set up instrument configurations and even specify sample environments. Users are then able to monitor the acquisition process in real time and receive estimates of the completion time. In addition, users can modify the previously entered information even after the acquisitions have commenced. My presentation will focus on how ANSTO integrated a multi-sample workflow into Gumtree, what approaches were taken to allow realistic time estimates, what programming patterns were used to separate the user interface from the execution of the acquisition, and how standardization across multiple instruments was achieved. Furthermore, this presentation will summarize the lessons learned during the development iterations, the feedback received from users and the future opportunities this approach enables.
        Speaker: Mr David Mannicke (ANSTO)
      • 84
        Generic Mapping Scans at Diamond Light Source
        Diamond Light Source is not alone in having multiple beamlines capable of conducting X-ray mapping-style experiments. These experiments involve moving a sample through the X-ray beam and collecting the result of the interaction on one or more detectors. As the mapping techniques themselves become more advanced, and the expectations of users increase with requirements for real-time feedback and processing, it has become increasingly complex and inefficient to deal with these beamlines on an individual basis. In early 2015, Diamond Light Source undertook a cross-beamline project with the aim of unifying the mapping experience of 5 beamlines at Diamond, and ultimately extending the final system to all beamlines which conduct mapping experiments. The initial 5 beamlines were deliberately picked to be as varied as possible, from the techniques and detectors they use (STXM, XRF mapping, XRD mapping, ptychography and ARPES mapping) to the relative maturities of the beamlines (from one of the first beamlines commissioned at Diamond through to beamlines currently under construction). The mapping project is now 18 months in, and significant progress has been made on all elements of the project. Unification of hardware and software implementations on the 5 primary beamlines has been driven forward significantly, allowing common frameworks such as GDA9, Malcolm2, EPICS areaDetector and HDF5 SWMR to be adopted and globally supported (a short SWMR reading sketch follows this entry). Centralisation of these key frameworks has allowed generic implementations to be created for continuous scanning of arbitrary trajectories as well as live visualisation and sophisticated live processing. The core deliverables of the project are now being rolled out to a selection of other beamlines which are able to easily adopt and maintain these common components, thus allowing Diamond to scale up individual beamlines' feature sets with minimal support implications.
        Speaker: Dr Mark Basham (Diamond Light Source)
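        As a small illustration of one of the frameworks named above, the sketch below reads a dataset that is still being written, using HDF5 SWMR via h5py; the file and dataset names are hypothetical.
        ```python
        # Open a file in SWMR read mode and fetch the most recent frame.
        import h5py

        with h5py.File("scan_000123.h5", "r", libver="latest", swmr=True) as f:
            dset = f["/entry/detector/data"]   # hypothetical dataset path
            dset.refresh()                     # pick up frames appended by the writer
            n_frames = dset.shape[0]
            if n_frames:
                latest = dset[n_frames - 1]    # most recently written frame
                print("frames so far:", n_frames, "latest sum:", int(latest.sum()))
        ```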
      • 85
        Data acquisition and analysis software at the Swiss Light Source macromolecular crystallography beamlines
        Data acquisition software is an essential component of modern macromolecular crystallography (MX) beamlines, allowing for efficient use of beamtime at synchrotron facilities. Coupled with our highly distributed automatic data processing routines, it allows assessment of data quality on the fly. Developed at the Paul Scherrer Institut, the DA+ data acquisition software is implemented at all three MX beamlines. DA+ consists of distributed services and components written in Python and Java, which communicate via messaging and streaming technologies. The major components of DA+ are the user interface, the acquisition engine, the hardware/detector layer and online processing. DA+ provides a simple and intuitive GUI, which supports conventional as well as advanced data acquisition protocols, such as multi-orientation SAD, energy-interleaved MAD, raster scanning and serial crystallography. Automatic data processing routines utilize freely available crystallographic data analysis programs and deliver near real-time results for data collected with PILATUS (CBF format) and EIGER X (HDF5 format) detectors. Experiment metadata and processing results are stored in a MongoDB database for further inspection and data mining (a minimal example follows this entry). The latter is accomplished with a Google Polymer-based application, which allows users to monitor the ongoing experiment during beamtime. The software architecture enables exploration of the full potential of the latest instrumentation at the SLS MX beamlines. For example, grid scan frames are ZMQ-streamed from a DECTRIS EIGER X 16M detector at high frame rates and analysed on the fly before being persisted to GPFS storage.
        Speaker: Dr Justyna Aleksandra Wojdyla (Paul Scherrer Institut)
        Slides
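        As a minimal illustration of metadata storage in MongoDB, the sketch below uses pymongo; the database, collection and document fields are hypothetical, not the actual DA+ schema.
        ```python
        # Store one experiment-metadata document and query it back.
        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017/")
        runs = client["mx_experiments"]["runs"]

        runs.insert_one({
            "beamline": "X06SA",           # illustrative values
            "detector": "EIGER X 16M",
            "protocol": "raster scan",
            "resolution_A": 2.1,
        })

        for doc in runs.find({"detector": "EIGER X 16M"}).limit(5):
            print(doc["beamline"], doc["protocol"], doc["resolution_A"])
        ```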
    • 7:30 PM
      Conference Dinner Mazzolis Caffe and Trattoria, Tivoli Gardens

      Mazzolis Caffe and Trattoria, Tivoli Gardens

    • Keynote Wednesday Marble Hall

      Marble Hall

      Copenhagen University

      Thorvaldsensvej 40
      • 86
        The Small Potato Collider (or how to solve a multidisciplinary problem using a modular camera)
        Current imaging sensors have amazing capabilities: large resolutions, high frame rates, hyperspectral vision... This is revolutionizing the industry with new applications such as 3D inspection and real-time chemical content detection. It is also changing the way we process images: a 12-megapixel sensor produces as much as 4.8 GiB/s, or 16 times the data generated by the "interesting events" of the LHC. In this presentation we will discuss the hardware and software used in a novel industrial smart camera that is used today for grading 25 tons of potatoes per hour, scanning products for chemical/biological content or even counting screws. This computer vision system is built around a heterogeneous architecture (FPGA, GPU and CPU) that can be easily programmed thanks to its open source stack. Ricardo Ribalda is the Lead Firmware Engineer at Qtechnology, where he is helping to develop the next generation of industrial cameras. He is a strong open source advocate with contributions to the Linux kernel, U-Boot and the Yocto Project among other projects. Ricardo holds a PhD in Computer Science and Telecommunications from the UAM University of Madrid, Spain.
        Speaker: Dr Ricardo Ribalda Delgado (Qtechnology A/S)
        Slides
      • 87
        Social sourdough - Twitter as an experiment control user interface
        Over the last few years, powerful mobile computing devices with internet access, such as smart phones and tablets, have become ubiquitous. In parallel, new software and services have emerged that provide a multitude of possibilities to exchange one-to-one, one-to-many and many-to-one information. This combination of hardware and software forms an enormous infrastructure that is capable of transferring billions of messages every day - can it be used in the context of experiment control? One aspect is the distribution of sensor data and status information from "slow equipment" with relatively low update rates to a group of stakeholders at regular intervals. A service that is well suited to this form of one-to-many communication is [Twitter][1]. Users can send messages, so-called "Tweets", about which all "followers" of the author are notified. The length of messages is severely limited, but images can be included in the form of a short URL. Twitter offers an [API][2] that can be used through a [Python package][3], so that messages can be created and sent out programmatically (a minimal example follows this entry). A home-made sourdough fermenter serves as a model system to test this approach. The fermenter consists of a styrofoam box with two 7 W heaters that can be switched on and off via USB-controllable 240 VAC plugs and a DS18B20 temperature sensor. The hardware is connected to a Raspberry Pi Model B, which is running [NICOS][4], a Python-based open source experiment control system developed at MLZ. Modules to communicate with the hardware have been added to the system, as well as a module that can publish sensor readings over Twitter at regular, user-specifiable intervals. Whenever the fermenter is loaded with a batch of sourdough, the temperature and heater status are made available to followers of [@Gaehrold][5], an account which was created for this purpose. Twitter also has a private messaging feature, where users can communicate in one-to-one channels. Another module has been added to NICOS that listens to these messages, executes the contained commands and replies with the results. The module allows simple access control based on Twitter user names, so that commands are only accepted from a limited group of people. [1]: https://twitter.com/ [2]: https://dev.twitter.com/rest/public [3]: https://github.com/bear/python-twitter [4]: http://nicos-controls.org/ [5]: https://twitter.com/Gaehrold
        Speaker: Michael Wedel (European Spallation Source ERIC)
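        The sketch below shows a minimal status update using the python-twitter package referenced above ([3]); the credentials are placeholders and the message format is illustrative, not the exact output of the NICOS module.
        ```python
        # Publish one sensor reading as a Tweet.
        import twitter

        api = twitter.Api(consumer_key="...", consumer_secret="...",
                          access_token_key="...", access_token_secret="...")

        temperature = 26.4          # would come from the DS18B20 sensor readout
        heater_on = True
        status = "Sourdough status: T = %.1f degC, heater %s" % (
            temperature, "on" if heater_on else "off")
        api.PostUpdate(status)
        ```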
      • 88
        Safety systems at the SAFARI-1 Neutron Diffraction Facility
        Personnel safety is of primary importance at beamline facilities at nuclear research installations, which includes the establishment of a safe working environment and minimizing the risk of accidental exposure to ionizing radiation [1]. This can be achieved through robust engineering design of independent safety systems as well as following pre-determined safe working procedures. The SAFARI-1 Neutron Diffraction Facility accommodates two instruments on one reactor beam line. Thus many key features, such as the in-pile collimator with its beam shaper, the filtering system and the primary beam shutter, are shared. From there on, each facility functions independently with its own secondary beam shutters. To minimise the risk that these secondary beam shutters are inadvertently operated from the control console, key passive safety systems were implemented. This presentation will highlight some of the key safety features, including the interlock system of the primary and secondary beam shutters. The performance of the neutron beam stops will also be presented. **References** [1] South Africa. (2006). National nuclear regulator act (Act no 47 of 1999): on safety standards and regulatory practices. (Government notice no. R388). Government gazette, 28755, 28 April.
        Speaker: Mr Deon Marais (Necsa SOC Limited)
        Proceedings
        Slides
      • 89
        The State of NeXus
        This is the traditional NOBUGS update on the NeXus data format for neutron, muon and X-ray science. With steadily increasing adoption at facilities, NeXus faces a number of challenges in developing the standard for new and future experiments while maintaining backward compatibility with data collected in the past. There are also different use cases that call for flexible and human-readable storage and access on the one hand, and more rigid, machine-parseable and automatable schemas on the other. At the moment the NeXus community is gathering and evaluating a number of technical solutions to address these issues. This includes work towards a versioning infrastructure for NeXus definitions, making definitions more modular, and using object orientation to the same end. This presentation will provide an overview of the current status of those activities and will also highlight a few topics around the use of NeXus for high data rate crystallography, the NeXus API and manual, as well as other business that may arise from the meeting of the NeXus committee in the week before NOBUGS. A minimal example of a NeXus-style file written with h5py follows this entry. http://www.nexusformat.org
        Speaker: Tobias Richter (European Spallation Source ERIC)
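        The sketch below writes a minimal NeXus-style HDF5 file with h5py, using the NX_class attribute convention (NXentry/NXdata); a real file following a NeXus application definition would carry considerably more metadata.
        ```python
        # Create a tiny NeXus-flavoured HDF5 file with one plottable dataset.
        import h5py
        import numpy as np

        with h5py.File("example_nexus.nxs", "w") as f:
            entry = f.create_group("entry")
            entry.attrs["NX_class"] = "NXentry"

            data = entry.create_group("data")
            data.attrs["NX_class"] = "NXdata"
            data.attrs["signal"] = "counts"
            data.attrs["axes"] = "two_theta"

            data.create_dataset("two_theta", data=np.linspace(5.0, 120.0, 50))
            data.create_dataset("counts", data=np.random.poisson(100, 50))
        ```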
    • 10:00 AM
      Coffee Marble Hall

      Marble Hall

      Copenhagen University

      Thorvaldsensvej 40
    • Contributions and Conference Closing Marble Hall

      Marble Hall

      Copenhagen University

      Thorvaldsensvej 40
      • 90
        Community Driven Scientific Software Projects: Lessons Learned on Tools and Practices
        On the one hand, science is about openness and collaboration; on the other hand, open and community-driven software projects have been shown to yield more generic and resilient solutions thanks to the diverse needs and feedback of their users. Yet the development of many scientific software projects (even free/open-source ones) is still managed in a closed way, typically by one or a few people from a single institution. In some cases this is simply due to a lack of knowledge of, or confidence in, the already available tools and practices for collaborative development. In this work we share our experience of coordinating the transition of the Taurus and Sardana projects from being in-house developments to being driven by an international and diverse community. The initially established rules are constantly evolving, aiming to reach continuous delivery while maintaining good software quality. The selected contribution workflows and testing strategies, as well as the software and documentation delivery tools, are described in detail, and the benefits of this kind of organization as well as potential pitfalls and lessons learned are discussed.
        Speaker: Mr Carlos Pascual-Izarra (Alba Synchrotron)
        Paper
        Slides
      • 91
        Communicating within a Distributed Team
        Collaborative software provides for improved community engagement, better reliability, and continued development. These benefits come with additional costs to effective communication. For a project developed by a distributed team to be successful, it must adopt strategies to overcome communication issues. Specifically, an effective distributed team must communicate where the project currently is (distribution model, packaging, documentation), where it is going (project plans, issue tracking, milestones, release dates), where it has been (source control system, previous versions), and its values (scope and direction). This presentation will explore a variety of open source projects, how they communicate, and practical suggestions for improving communication in your projects.
        Speaker: Dr Peter Peterson (Oak Ridge National Laboratory)
        Slides
      • 92
        Test Driven GUI Development
        From its inception, software quality has been at the forefront of concerns in Mantid [1], a scientific framework for the reduction of neutron scattering data. The Mantid project was implemented with a long-term future in mind, and the need to maintain quality and robustness while continuing to develop the framework has been a cornerstone of the project. As such, we have developed a comprehensive and robust automated testing framework of many thousands of unit tests covering all aspects of the underlying framework and algorithms, which are supplemented by hundreds of automated system tests that verify full workflows on real data volumes. This has proved invaluable in ensuring that core aspects of the application are thoroughly exercised and validated before any change is accepted. However, graphical user interfaces (GUIs), which provide an important front-end for many of Mantid's users, have historically been hard to test, needing specialised commercial testing frameworks with scripts that can often be fragile to further development changes. In addition, traditional approaches to generating such interfaces often introduce long-term problems into frameworks of this type. In the past, manual testing has been the solution in these areas; however, this is both expensive in a large project, where testing must be repeated with each release, and prone to missing errors. Indeed, the user interface has been the site of the majority of bugs found after release in Mantid for many years. Recently, we have begun to build new interfaces using the Model-View-Presenter (MVP) design pattern, which belongs to a family of designs in which view and presentation logic are entirely separated. The presentation logic is naturally made available for unit testing, so we can automate our testing scenarios, provide wide test coverage, and test at speeds unobtainable via GUI testing frameworks (a generic illustration follows this entry). Test-driven development complements MVP approaches, and we find that applications prove to be much more robust and reliable, even while under significant development. This design pattern can also be applied at a lower level: recently, several technique areas have requested an enhanced batch-processing capability within their user interfaces. It therefore makes sense to build a general data reduction widget that can be inserted into Mantid user interfaces, allowing developers to share the maintenance and development effort involved in taking care of UIs. This new "DataProcessor" widget requires little information about the specific technique area, providing a neat and concise way to execute complex batch processing via "DataProcessorAlgorithms", and as it uses the MVP pattern, it comes with a full set of unit tests that ensure its long-term maintainability and robustness against code changes, across all the interfaces that use it.
        Speaker: Raquel Alvarez (ISIS, STFC)
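        The sketch below is a generic illustration (not Mantid's actual classes) of why MVP makes GUI logic unit-testable: the presenter only talks to a view interface, so a test can substitute a mock view.
        ```python
        # A presenter with pure presentation logic, tested against a mock view.
        import unittest
        from unittest import mock

        class RunSelectionPresenter(object):
            """Validate the run number typed into the view."""
            def __init__(self, view):
                self.view = view

            def on_load_clicked(self):
                run = self.view.get_run_number()
                if not run.isdigit():
                    self.view.show_error("Run number must be numeric")
                else:
                    self.view.show_status("Loading run %s" % run)

        class RunSelectionPresenterTest(unittest.TestCase):
            def test_invalid_run_number_reports_error(self):
                view = mock.Mock()
                view.get_run_number.return_value = "abc"
                RunSelectionPresenter(view).on_load_clicked()
                view.show_error.assert_called_once_with("Run number must be numeric")

        if __name__ == "__main__":
            unittest.main()
        ```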
      • 93
        Plankton: A Stateful Device Simulation Framework
        User-facing instrument control software will be a critical component of the operation of the European Spallation Source (ESS) under construction in Sweden. The control software must be robust, proven, and well tested for day-one operations. A sequential approach to development, whereby the target hardware is procured, installed and made operational prior to the development of the control software, would undoubtedly result in late delivery and schedule blockages further down the line. To avoid a schedule conflict, the control software must be developed in parallel with the target hardware and drivers. Furthermore, deep testing of the control software will involve driving the hardware to its limits, such that critical error handling mechanisms can be exercised. For expensive hardware, testing of this form would be undesirable, even if the target devices were available. The distributed nature of the software team also presents a challenge, given that full replicas of all hardware will not be made available at all development locations. Given these challenges, in order to progress the development of control software for the ESS, a way to simulate devices and controllers is required. To enable simulating the complex behaviour of many devices using a standardised approach, we have developed a deterministic, state-machine-based device simulation framework (a minimal illustration of the idea follows this entry). This ensures that common functionality is implemented once and thoroughly tested, encourages simulators to follow a common design, and allows device behaviour to be captured in great detail with relatively little development effort. We chose to develop in Python for speed of implementation and accessibility, to allow users to easily extend functionality and add their own devices. The implementation of device behaviour has been separated from the communication protocol to allow simulation both at the controller level via EPICS [1], and at the device level using various low-level protocols. It is often useful to vary aspects of device behaviour for a particular use case, and we have therefore provided a simple way to extend and customise a device based on an existing one. Docker [2] images are provided for easy deployment and to simulate a network of devices on a single machine. Our primary aim is to accelerate the initial development of control software in the absence of hardware to test against. A library of simulators will also be useful for automated unit and system tests as development progresses. It may also prove useful for validating user scripts by performing a simulated dry run in more detail than would otherwise be possible. References: - [1] EPICS: http://www.aps.anl.gov/epics/ - [2] Docker: https://www.docker.com/
        Speaker: Mr Michael Hart (STFC)
        Slides
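        The sketch below gives a minimal flavour of the state-machine idea behind such device simulations; it is an illustration of the concept only, not the actual framework's API.
        ```python
        # A simulated device whose behaviour is expressed as states and
        # transitions, advanced deterministically in fixed time steps.
        class SimulatedChopper(object):
            def __init__(self, target_speed=100.0):
                self.state = "stopped"
                self.speed = 0.0
                self.target_speed = target_speed

            def start(self):
                if self.state == "stopped":
                    self.state = "accelerating"

            def process(self, dt):
                """Advance the simulation by dt seconds."""
                if self.state == "accelerating":
                    self.speed = min(self.speed + 20.0 * dt, self.target_speed)
                    if self.speed >= self.target_speed:
                        self.state = "spinning"

        chopper = SimulatedChopper()
        chopper.start()
        for _ in range(10):
            chopper.process(1.0)
        print(chopper.state, chopper.speed)   # -> spinning 100.0
        ```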
      • 94
        ESS Event Data Streaming
        At the ESS, data from each instrument will need to be transported elsewhere for use by various software packages which will perform tasks such as analysing the data, providing live visualisation or writing data files. To do this, an ESS and ISIS in-kind collaboration will implement a robust and high-throughput streaming system. The data stream will largely comprise neutron events, and efficiently encoding a compact, serialised message for sending this information down the wire will be of high importance. To achieve this, we plan to use Google FlatBuffers for serialisation and to base our data streaming software on Apache Kafka (a minimal producer sketch follows this entry). In this short talk I will briefly introduce these two technologies and present the results of the event data streaming tests we have carried out so far.
        Speaker: Dr Matthew Jones (STFC, Tessella)
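        The sketch below publishes a serialised event message to Apache Kafka using the kafka-python client; the broker address, topic and payload are placeholders (the real system serialises neutron events with FlatBuffers rather than JSON).
        ```python
        # Send one serialised event batch to a Kafka topic.
        import json
        from kafka import KafkaProducer

        producer = KafkaProducer(bootstrap_servers="localhost:9092")

        event_batch = {"pulse_time": 1476698400000,
                       "detector_ids": [101, 205, 317],
                       "times_of_flight": [1200, 4500, 8700]}   # stand-in for a FlatBuffer
        producer.send("instrument_events", json.dumps(event_batch).encode("utf-8"))
        producer.flush()
        ```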
      • 95
        Virtual Large Facility for Experiment-based Structural Biologists
        CCP4-DaaS is a research infrastructure project undertaken by the UK [Science and Technology Facilities Council][1] (STFC) to provide centralised computational facilities targeting the [CCP4][2] community of researchers in macromolecular crystallography. The project implements a Data Analysis as a Service (DaaS) framework, building on the [ICAT][3] stable of software tools for managing Large Facility experimental data by linking all aspects of the research chain from proposal through to publication. DaaS is a response to the evolving landscape of Large Facility-oriented research, whereby advances in technologies and research methodologies continue to push up the volume of data being generated and consumed in analyses ([RCUK UK Large Research Facilities Roadmap 2010][4]). Conceptually, DaaS is a virtual Large Facility in the cloud that connects distributed data and compute resources to enable users to perform compute- as well as data-intensive analyses both during and after their visit to STFC Large Facilities. In the CCP4-DaaS implementation, we adopted the [OpenNebula][5] cloud managed by STFC's [Tier1][6] Data Centre as the cloud platform. Registered CCP4-DaaS users can access CCP4 virtual machines via a remote desktop or a browser client. The CCP4 VMs are configured at the contextualisation stage, which mounts the user's persistent file share and provides a set of standard software tools including the CCP4 integrated suite of programs. The latter has been enhanced to permit submission of compute jobs to remote clusters, with the STFC-based [SCARF][7] resource as a pre-configured option. CCP4-DaaS represents a specific implementation of DaaS by STFC to deliver a centralised data analysis service on the cloud to the CCP4 user community. This project is part of the STFC [Scientific Computing Department][8]'s larger facilities development programme, which is building similar DaaS platforms for users of [ISIS][9], [OCTOPUS][10] and other Large Facilities as a continuous improvement to STFC research facilities. Beyond STFC, linking these DaaS platforms with the new [West-Life][11] virtual research environment for structural biology is also under consideration. [1]: http://www.stfc.ac.uk/ [2]: http://www.ccp4.ac.uk/index.php [3]: https://icatproject.org/ [4]: http://www.rcuk.ac.uk/documents/research/rcuklargefacilitiesroadmap2010-pdf/ [5]: http://opennebula.org/ [6]: https://www.gridpp.ac.uk/ [7]: http://www.scarf.rl.ac.uk/ [8]: http://www.scd.stfc.ac.uk/SCD/default.aspx [9]: http://www.isis.stfc.ac.uk/index.html [10]: http://www.clf.stfc.ac.uk/CLF/Facilities/Octopus+capabilities/14219.aspx [11]: http://west-life.eu/
        Speaker: Dr Shirley Crompton (Science and Technology Facilities Council)
      • 96
        Conference Close
        Poster Prize, Next Planned Venue, Summary, Announcements for Tours
    • 12:00 PM
      Buses to Lund
    • MAX IV and ESS site tours ESS and MAX IV (Lund)

      ESS and MAX IV

      Lund

    • 4:00 PM
      Buses back to the Airport and Copenhagen