Computational Mathematics with Jupyter workshop

Back in mid-January three members of the University of Sheffield's Research Software Engineering Team (me, Mike Croucher and Tania Allard) spent a week at a Computational Mathematics with Jupyter workshop, hosted at Edinburgh's International Centre for Mathematical Sciences.

This brought together the many members of the consortium working on the OpenDreamKit Horizon 2020 European Research Infrastructure project. The overall aim of the project is broad (to further the open-source computational mathematics ecosystem), so it was unsurprising that the collective experience of the attendees was broad too. The attendees generally fell into one of the following four camps:

  • Researchers interested in solving Group Theory and Semigroup problems using the GAP software, some of whom were involved in developing a Jupyter kernel for GAP;
  • Others interested in the SageMath computational mathematics ecosystem (which is particularly strong for computational algebra) and working on a Jupyter kernel for it;
  • Research Software Engineers working on interactive widgets, visualisation tools and workflow tools for Jupyter;
  • People with experience/interest in using computational mathematics tools for teaching purposes.
The attendees of the workshop.

The structure was different from that of conferences I'd attended previously: on each of the five days we listened to and debated presentations in the morning, then busied ourselves with code sprints in the afternoon.

Correctness, sustainability and human fallibility

Our own Mike Croucher kicked things off by asking Is your research software correct? in which he presented Croucher's Law:

I can be an idiot and will make mistakes.

with the corollary that

You are no different!

He argued, convincingly, that in our research we therefore need to put in place safeguards to lessen the chance and impact of mistakes, and proposed the following as partial solutions:

  • Automate (aka learn to program)
  • Write code in a (very) high-level language
  • Get some training
  • Use version control
  • Get a code buddy (Maybe an RSE!)
  • Share your code and data openly
  • Use literate computing technologies
  • Write tests
  • Cite code

Raniere Silva from the Software Sustainability Institute followed on with a complementary talk on how to make computational mathematics software more sustainable. He commented that odd numerical bugs can easily creep in over time (e.g. differing floating point behaviour between Python 2 and 3) but that we can maintain confidence in software using version control, continuous integration, good documentation, tutorials, knowledge bases, instant messaging and by developing communities around the software we value.

Alexander Konovalov then talked about a particular case of making research software more sustainable and portable: he's been using Docker containers to run GAP. This led to a discussion on whether Docker is a sensible solution for archiving/reproducing workflows: will it be around in ten years' time? Those interested in that particular issue might benefit from attending the forthcoming Software Sustainability Institute workshop on Docker Containers for Reproducible Research.

Jupyter: what is it and how can we diff/merge/test Notebooks?

We were then given what was pitched as a 'general introduction' to Jupyter but ended up covering much more ground than anticipated, largely due to the speaker being Thomas Kluyver, one of the core IPython developers (who happens to have gained his PhD from the University of Sheffield). Thomas talked about the most significant features (literate programming environments; the power and versatility of using the browser as a REPL; Jupyter's client-server architecture) but also touched upon various tools and platforms that have built on Jupyter including:

  • JupyterHub: a multi-user hub which "spawns, manages, and proxies multiple instances of the single-user Jupyter Notebook server". One of the University of Sheffield's deliverables for the OpenDreamKit project is to get this running on our own computer clusters so users dictate what resources (e.g. cores, memory, GPUs) they want when they start a single-user Notebook session;
  • tmpnb: a JupyterHub-like system for launching temporary single-user Notebook sessions in the cloud (which are each backed by a Docker container). tmpnb powers https://try.jupyter.org/ (hosted by RackSpace), which allows people to briefly test out Jupyter without installing anything locally;
  • binder: a tool for turning a GitHub repository into a collection of interactive Notebooks, configuring the required environment/dependencies using a Dockerfile, Python requirements.txt file or Conda environment file;
  • nbgrader: a tool for distributing coding and/or free text Notebook-based assignments then automatically or manually grading them. This can integrate with JupyterHub;
  • nbconvert: convert a Notebook to HTML/Markdown/PDF/scripts or a custom format (using a Jinja2 template). It is used by nbviewer.jupyter.org to create online static HTML views of Notebooks, and is also used by GitHub for rendering Notebooks (see the sketch below this list);
  • nbparameterise: Often you want to design Notebooks that demonstrate/explore the impact of a small number of key variables. This project of Thomas's allows such variables to be set in the first code cell; the entire Notebook can then be run non-interactively and rendered to HTML;
  • jupyterlab: This will be the next iteration of Jupyter's UI: rather than exclusively displaying a terminal, Notebook or file editor, a multi-tab, multi-pane interface will allow you to view and interact with several of these at once. It will therefore look and feel a bit more like Spyder/RStudio/MATLAB, but that's no bad thing as all of those make good use of the screen real estate provided by the wide monitors we all have these days.
JupyterLab: the future of the Jupyter interface.
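Of the tools above, nbconvert is the easiest to demonstrate. Here's a minimal sketch (with a made-up filename) of converting a Notebook to static HTML using nbconvert's Python API rather than its command-line interface:

import nbformat
from nbconvert import HTMLExporter

# Read a Notebook file into an in-memory object
nb = nbformat.read('analysis.ipynb', as_version=4)

# Render the Notebook to static HTML, much as nbviewer does
body, resources = HTMLExporter().from_notebook_node(nb)

with open('analysis.html', 'w') as f:
    f.write(body)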

Given the size of the Jupyter ecosystem and how quickly it has grown, it was great to hear from a core developer which related projects he thinks are the most significant and interesting!

Next up, Vidar Fauske gave a talk on nbdime, a new tool for merging and diffing Jupyter Notebooks. The backstory is that for some time we've been recommending Jupyter to those wanting to start using Python or R in their research, and we've also been telling everyone to use version control, but the diffing and merging tools typically used with version control systems don't work well with Notebooks as they:

  • operate on lines, without consideration of whether a file has a nested structure (JSON in the case of Notebooks);
  • naively treat Base64-encoded binary objects in Notebooks in just the same way as text;
  • have no logic for omitting certain entities (execution counters; cell outputs) from version control (although the wonderful nbstripout can handle both of these cases when triggered by a git hook).

Ultimately, visualising the differences between two Notebooks and merging Notebooks in sensible, useful ways requires that the tools performing these functions have some understanding of the structure and purpose of Notebooks. nbdime has that awareness:

  • The major unit for merging/diffing is the cell, not the line.
  • Input cells are merged/diffed as strings, whereas cell outputs are treated as atomic: they either match or they don't.
  • Execution counts are sensibly ignored by default.

nbdime provides a core library, plus command-line and browser interfaces for diffing and merging.
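As a minimal sketch of how I understand the Python API can be used (filenames are hypothetical):

import nbformat
import nbdime

# Load the two Notebook versions to be compared
nb_before = nbformat.read('analysis_before.ipynb', as_version=4)
nb_after = nbformat.read('analysis_after.ipynb', as_version=4)

# Unlike a line-based diff, this produces a structured diff that
# understands cells, outputs and the Notebook's nested JSON
diff = nbdime.diff_notebooks(nb_before, nb_after)
for entry in diff:
    print(entry)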

Overall, I'm massively excited about nbdime for facilitating much slicker Notebook-based version controlled workflows and hope it sees widespread adoption and promotion by the likes of Software Carpentry.

nbdime's nbdiff tool for viewing the differences between two Notebooks.

Hans Fangohr then introduced nbval, a new tool for automating the validation of Jupyter Notebooks. This could give researchers greater confidence in their workflows: does a demonstrative Notebook still give the same answers if re-run after making changes to the Notebook's environment (e.g. the package dependencies)?

nbval, a pytest plug-in, works as follows: it creates a copy of a Notebook file, executes the copy in the current Python environment, saves the copy Notebook with its new cell outputs then compares the outputs of the two Notebooks. There are some nice features to control the granularity of testing: flags can be set so certain cells are run but not tested; regexes can be used to ignore oft-changing output strings (e.g. paths, timestamps, memory addresses). Images and LaTeX can't be handled yet.
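As a rough illustration (assuming nbval and pytest are installed, and using a made-up Notebook filename), the check that py.test --nbval demo.ipynb performs on the command line can also be driven from Python:

import pytest

# Re-execute each cell of the Notebook and compare the fresh outputs
# against the outputs stored in the .ipynb file
exit_code = pytest.main(['--nbval', 'demo.ipynb'])
print('All cell outputs matched' if exit_code == 0 else 'Some outputs differed')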

Again, I'm excited about this new tool: being able to package both workflow documentation and regression/acceptance tests as Notebooks is a great idea. Note that both nbdime and nbval include mechanisms for comparing Notebooks yet are presently separate projects. It will be interesting to see if there's any convergence in future.

Interactive widgets in Notebooks

We were treated to two talks on the ipywidgets package, which provides Python and Javascript-backed widgets for interacting with Notebooks e.g. sliders for assessing the impact of model parameters on trends in embedded matplotlib plots.

First, Jeroen Demeyer introduced us to the high-level interact Python decorator function and interactive class that one can use to control function inputs using an HTML+Javascript widget. He then went on to explain how one can manually reproduce the magic of these mechanisms: you instantiate some (typed) input widgets and output widgets, add them to an on-screen container, then associate each input widget with a callback.
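For a flavour of the two approaches, here's a minimal sketch to be run in a Notebook (the function and value ranges are made up for illustration):

import ipywidgets as widgets
from ipywidgets import interact

# High-level approach: interact inspects the keyword arguments and
# builds an appropriate input widget (here, an integer slider)
@interact(n=(0, 10))
def show_square(n=5):
    print(n ** 2)

# Manual equivalent: create input/output widgets, put them in a
# container and attach a callback to the input widget
slider = widgets.IntSlider(value=5, min=0, max=10, description='n')
out = widgets.Output()

def on_value_change(change):
    with out:
        out.clear_output()
        print(change['new'] ** 2)

slider.observe(on_value_change, names='value')
widgets.VBox([slider, out])  # displayed when this is the last expression in a cell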

Next, Sylvain Corlay talked about the ipywidgets ecosystem and the future direction of the project. He mentioned several projects that have built on ipywidgets, all of which sound exciting but none of which I'd heard of before this!

  • bqplot: a matplotlib alternative that supports the same API, uses custom ipywidgets and behind the scenes uses d3.js for low-level drawing;
  • pythreejs: this exposes the API of the three.js Javascript/WebGL 3D library to Python; this is a low-level API, not a Python plotting library.
  • ipyleaflet: a GIS plotting library that uses ipywidgets and the Leaflet Javascript library.
  • widget-cookiecutter: a template for creating custom ipywidgets.

The current version of ipywidgets, released since the workshop, includes some interesting developments: much more of the code is now written in JavaScript (actually TypeScript) rather than Python, so widget state is maintained in JavaScript-land: widgets can therefore now be rendered and manipulated without a Jupyter kernel! See this statically-rendered Notebook on GitHub as an example. Another advantage of migrating the bulk of the code to JavaScript is that the widgets should be usable with kernel languages other than Python, such as R (once people have written language-specific ipywidgets backends).

Separate to ipywidgets, we were also introduced to SciviJS, a tool currently being developed by Martin Renou at LogiLab for visualising 3D mesh-based geometries in a Jupyter Notebook. It also uses WebGL/three.js for rendering, so is rather performant. I can see some ex-colleagues in civil engineering really liking this. Check out the online demo.

Numbas for online computer-aided assessment

Numbas is an open, web-based system for formative and summative maths and science tests. It is being developed by Christian Lawson-Perfect from the University of Newcastle's Maths and Stats E-Learning unit. It's very different to teaching environments that use Jupyter (e.g. SageMathCloud): almost all the code is self-contained HTML+Javascript run on the client (for scalability and resilience), and it is for generating closed tests (rather than open mathematical exercises). It looks very attractive and intuitive from the user's perspective!

Christian also mentioned Up For Grabs, a site of projects wanting help on simpler tasks. He says it's a good and simple way of getting less experienced developers involved with open-source projects. As a project maintainer you upload some blurb about your project and tell the site which GitHub Issue tag(s) indicate smaller tasks that are 'up for grabs'.

Case studies of Jupyter usage

Hans Fangohr from the University of Southampton reported on using Python and Jupyter to encapsulate multi-stage micro-magnetism modelling workflows: his team have been able to automate the generation of input files and processing of output files for/from old but robust modelling software (OOMMF); Jupyter then further masks away the complexities of running models.

Mark Quinn then talked about the impact that SageMathCloud, an online teaching environment which uses Jupyter, has had on the teaching of physics, astronomy and coding at the University of Sheffield. He's been working with Mike Croucher to develop SageMathCloud courses for the Physics department, with the goal of introducing effective programming tuition early in undergraduate Physics degree programmes. He's now quite a fan of Jupyter as a coding environment and of SageMathCloud's courseware tools (chat facilities and mechanisms for setting and grading assignments), but he has been using it long enough to identify some challenges/issues too (e.g. students getting confused about the order of execution of cells; students opening many notebooks at once, each of which has a resource footprint).

Mark is involved with the Shepherd Group, who research the efficacy of teaching methods and are based in the same Physics department. They've recently been studying the impact of using the Jupyter Notebook on undergraduate students who had and hadn't studied Physics at A-Level. They tested students (at different levels of Bloom's Taxonomy) before and after teaching and concluded that the Notebooks were suitable for aiding students, regardless of whether they had a Physics background. Hopefully the Software Sustainability Institute can lend their support to pedagogical studies of this nature in future.

Other talks

I should note that there were also a number of other talks that focussed on the GAP and SageMath computational mathematics software packages: I've deliberately not mentioned them here so as not to expose my lack of understanding of group theory and semi-groups and also this post is long enough already! See the full programme for info on things I've neglected plus links to the presentations.

Culture

This was the first time I'd been to a conference where the emphasis was very much on sharing ideas and working together: the academic conferences I'd attended prior to this had an air of competition about them. Looking forward to meeting up with the OpenDreamKit gang again!

A new member of the team: Tania Allard

About my research

I recently completed a PhD in Materials Science at the University of Manchester which focused on computational nanomechanics. The primary goal was to develop a robust characterisation technique for very small volumes of biocompatible materials and biological tissues.

Since such materials exhibit highly complex mechanical responses, the extraction of the values of constitutive parameters from test outputs is not straightforward and often requires inverse analysis. For such purposes I used an iterative Finite Element Analysis approach to extract meaningful constitutive parameters from the experimental data.

The Finite Element simulations were performed using ABAQUS, while the optimisation-based iterative approach was implemented by a series of MATLAB and Python codes (MATLAB was chosen as it provides a compatible interface to FE codes via multiple programming languages). The material constitutive laws were prescribed using either user-developed Fortran subroutines or Abaqus built-in material models. For hydrated materials an additional Fortran subroutine for surface flow conditions was used.

The workflow was as follows (a rough code sketch follows the list):

  • Python scripts were used to generate the Abaqus input files with user provided variables (e.g. geometrical and boundary conditions identical to the experimental set up)
  • The mean experimental response was fit to an analytical expression for time-dependent creep using MATLAB's lsqnonlin algorithm. The parameters obtained from the initial fit were then used as the initial guesses for the optimisation algorithm, after which the Fortran (UMAT) subroutine and/or the input files were updated.
  • The FEA was performed and upon completion the relevant data was extracted using a Python script.
  • The MATLAB code was then used to fit the data obtained from FEA to the experimental observations; the parameters of the constitutive model were adjusted by means of the lsqnonlin algorithm. The quality of a parameter set was evaluated via the root mean square error between the experimental and numerical data.
  • The parameters were iteratively refined until the objective function satisfied a given convergence criterion.
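As an illustration of the outer optimisation loop only, here is a rough Python sketch in which scipy's least_squares stands in for MATLAB's lsqnonlin and a simple analytical creep law stands in for a full Abaqus run; all names and numbers are invented for the sketch.

import numpy as np
from scipy.optimize import least_squares

times = np.linspace(0.0, 100.0, 50)  # load-hold times (arbitrary units)

def run_model(params):
    """Stand-in for one FEA run: a simple time-dependent creep law.
    In the real workflow this would write an Abaqus input file, run the
    solver and extract the simulated creep response."""
    amplitude, tau, offset = params
    return amplitude * (1.0 - np.exp(-times / tau)) + offset

# Synthetic 'experimental' data, purely for the sketch
experimental = run_model((2.0, 15.0, 0.5))
experimental = experimental + 0.01 * np.random.default_rng(0).normal(size=times.size)

def residuals(params):
    # least_squares minimises the sum of squared residuals, which is
    # equivalent to minimising the RMS error used in the real workflow
    return run_model(params) - experimental

initial_guess = [1.0, 10.0, 0.0]  # in practice, from the initial analytical fit
result = least_squares(residuals, initial_guess)
print(result.x)  # fitted constitutive parameters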

Once the constitutive material parameters were obtained, different parametric studies were performed using the HPC facilities at the University of Manchester (e.g. to study the effect of sample thickness, water content, and support material on the mechanical response).

Research Software Engineering

I believe there are various reasons that led me to pursue a career as an RSE. While doing my PhD I realized how important research software is, especially when you are dealing with highly complex physical systems and when the use of experimental techniques is insufficient, too complex, too expensive or otherwise unsuitable for what you are studying. Also, I became very frustrated by the lack of open-source software in my area, especially when we contacted researchers in other institutes who demanded we sign a waiver to get access to their software. Whenever I found, or was passed, codes, scripts or subroutines which would "help" with my research, I spent an incredible amount of time going through badly commented (if commented at all!), badly documented scripts with no version control whatsoever. I figured it could not only be me getting this frustrated and wasting valuable time trying to make sense of poor code, so I started using version control (and encouraging people in my lab to do the same) and producing code that could be passed on to others (more than likely people in my lab).

Eventually I realized this was a bit of what RSEs do, and it turned out I enjoyed the software development side of research a bit more than the experimental bits, so pursuing an RSE career was pretty much a natural thing to do (and I suppose I wanted to prevent people from getting frustrated when accessing others' scripts). I had also realized that the RSE community in the UK is relatively small (albeit constantly growing), so when I saw the advertisement for the position at Sheffield I asked a couple of people at the University of Manchester if they knew the team, and I received good comments (mainly on how enthusiastic Mike Croucher is). I did my own research about the University, the RSE team and the projects in different groups, and it seemed that both the University and the RSE team would provide not only interesting projects to work on but also valuable insight (and mentoring) from experienced RSEs. After realizing that the team was also quite small, I figured it would allow for plenty of opportunities to learn loads of new skills while using my current expertise.

Last year I volunteered at the national RSE conference: I thought this would be an excellent opportunity to get to know the community and to talk to RSEs from different places/universities about their projects and why they pursued a career like this. It definitely opened my eyes to the diversity of projects they work on and how collaborative this environment actually is, and if anything it just made me feel more excited and confident about my career post-PhD.

So when I had the chance to be on the committee for the RSE2017 conference I decided to get involved. Last year's conference was a great experience for me and I think I might have one or two ideas to make this year's event better (even if only a little).

So far, my experience as a member of the RSE community has been quite pleasant. We always hear about the computer science and STEM communities not being particularly diverse, but I can see many groups working hard to be more inclusive and to support junior RSEs like me. The community is filled with very enthusiastic people, often working on very interesting stuff. The Sheffield RSE team has been very welcoming and supportive over the few months I have been here, so I can truly say that I am very happy to be part of this team.

I am not 100% sure what my future career looks like, but I would certainly like to help raise awareness of how important software actually is for research, and of how important it is for that software to be developed following good practices and with sufficient resources. Many people are now aware of how important open data sources are, and I hope people will come to see research code in a similar way: it needs to be open and made available to whoever needs it, not least to demonstrate how reproducible the associated studies are. I believe I will be doing my part by setting and maintaining software standards within the RSE team and by spreading the word. Also, I am massively interested in the so-called big data/data science areas, so I would definitely like to get involved in more projects in those areas.

Will Furnass joins to work on Jupyter and Grid Engine integration

Will Furnass

The Research Software Engineering team at the University of Sheffield has gained a new member! I joined at the start of January and will primarily be working on OpenDreamKit which is a Horizon 2020 European Research Infrastructure project with the aim of furthering the open-source computational mathematics ecosystem.

My contribution to this project is to extend work previously started at the University of Sheffield to allow researchers to more easily run interactive workflows on High-Performance Computing clusters, specifically to make it easy, robust and intuitive to run Jupyter Notebooks on clusters running job scheduling software from the Grid Engine family.

Jupyter Notebooks are runnable documents containing code snippets that are viewed and manipulated from a web browser. They are an increasingly popular way of encapsulating, presenting and sharing a coding-oriented workflow. A Notebook comprises a column of cells, where each cell can contain:

  • some code or
  • explanatory text (that can be formatted using Markdown) and/or mathematical expressions (formatted using MathJax).

When a code cell is executed by the user it can return anything renderable by a modern web browser:

  • a single value,
  • a table of data,
  • a figure or
  • a mathematical expression.

For example:

/images/jupyter_notebook_example.png

The code cells of a Notebook can be (re)run in any order, so Notebooks are very useful for interactive exploration.
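To give a feel for what a Notebook actually is under the hood (a JSON document containing a list of cells), here's a minimal sketch that builds and saves one programmatically using the nbformat package (the filename is made up):

import nbformat
from nbformat.v4 import new_notebook, new_markdown_cell, new_code_cell

# A Notebook is essentially a list of cells plus some metadata
nb = new_notebook(cells=[
    new_markdown_cell('# A tiny example\nSome *formatted* explanation.'),
    new_code_cell('print(6 * 7)'),
])

nbformat.write(nb, 'example.ipynb')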

The structure of Jupyter is typically as follows:

[Notebook in browser] <---> [Jupyter server] <---> [Kernel]

where the kernel is the part that executes code cells. There are kernels for many different programming languages!

The server and kernel can run on the same machine as the web browser but the architecture allows them to also run on remote machines. These remote systems could be:

  • a research group's central server,
  • a Jupyter-aware cloud service (e.g. SageMathCloud or Azure Notebooks) or ...
  • the HPC clusters operated by so many academic institutions.

This is where my part of OpenDreamKit comes in. Computer clusters such as Iceberg and ShARC here at the University of Sheffield allow users to run computational jobs with more resources than are typically available in researchers' own machines. Jobs can have parallel threads of execution running on up to sixteen cores per node and/or spanning multiple nodes; jobs can use hundreds of gigabytes of RAM and can make use of the latest generation of GPUs for things like accelerated deep learning workflows. However, the need to request resources then submit and monitor jobs from the command line can be a steep barrier to entry for some. Being able to easily run Jupyter Notebooks on our clusters, and to request the necessary resources for our interactive explorations via an intuitive web interface, could help make HPC more accessible and useful to those without a strong understanding of Linux and the command line.

We already have an instance of JupyterHub running to allow users to start Jupyter sessions on our Iceberg cluster thanks to the efforts of Stuart Mumford. I will be working on:

  • Upgrading this to use the latest version of JupyterHub;
  • Setting up JupyterHub on our new cluster (ShARC);
  • Developing a mechanism for easily requesting resources (more RAM / CPU cores / GPUs) from the Grid Engine scheduler (see the sketch after this list);
  • Making the JupyterHub and Grid Engine integration more robust;
  • Looking at how JupyterHub could be set up on HPC clusters at other institutions (possibly using different schedulers) for research/teaching.
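To give an idea of what such integration can look like, below is a heavily simplified sketch of a jupyterhub_config.py using the third-party batchspawner package, whose Grid Engine spawner submits each user's Notebook server as a scheduler job. The resource values and script template here are made up for illustration and are not our actual configuration.

# jupyterhub_config.py (illustrative sketch only, assuming batchspawner is installed)
c.JupyterHub.spawner_class = 'batchspawner.GridengineSpawner'

# Template for the job script that launches a single-user Notebook server;
# batchspawner substitutes {cmd} with the command that starts the server
c.GridengineSpawner.batch_script = """#!/bin/bash
#$ -pe openmp 2
#$ -l rmem=8G
export OMP_NUM_THREADS=2
{cmd}
"""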

I'm rather excited about this new role. One nice aspect of it is that, according to my contract, I am now officially a Research Software Engineer:

Dear Dr Furnass

Further to recent discussions, I am pleased to confirm the change in your appointment with the University of Sheffield. The details of your offer are provided below:

Appointment Details: You, Dr William Furnass, shall be employed by the University of Sheffield as a Research Software Engineer in the Department of Computer Science with effect from 1 January 2017. This position is offered on a fixed term basis.

This demonstrates that research institutions have started recognising Research Software Engineering as an alternative career path in academia (something the Software Sustainability Institute have been pushing for for some time) and that RSEs aren't simply post-doctoral researchers who happen to write software.

The path to this point has not been particularly direct: I have a computer science degree, worked as an IT systems engineer in the film industry, have a PhD plus post-doc experience in water engineering (where I developed semi-physical and data-driven models of water quality in water distribution networks) and have provided support to the users of the University of Sheffield's HPC clusters. In addition I have taught or helped run RSE, water engineering and study skills workshops.

My interests include helping researchers optimise data analysis workflows (primarily using higher-level languages), providing training in RSE best practices and systems administration. You can contact me via:

  • Email: w.furnass (at) sheffield.ac.uk
  • Twitter: @willfurnass

£1 million grant to shed light on how we learn languages

A £1 million grant to help researchers understand what speakers know about languages, in order to help make learning foreign languages easier, has been awarded to the University of Sheffield's Faculty of Arts and Humanities.

Over five years, the Research Leadership Award from the Leverhulme Trust will allow experts to develop new, accurate ways of describing speakers’ linguistic knowledge, by using machine-learning techniques that mimic the way in which humans learn.

The patterns they find will be verified in laboratory settings and then tested on adult foreign language learners to see if such patterns can help them learn a foreign language in a way that resembles how they learned their mother tongue.

The aim is to lead a step-change in research on language and language learning by capturing the linguistic knowledge adult speakers build up when they are exposed to a language in natural settings. These insights will help with the development of strategic language teaching materials to transform the way in which we teach foreign languages.

The team will be led by Dr Dagmar Divjak from the University’s School of Languages and Cultures, in close collaboration with Dr Petar Milin, Department of Journalism Studies, and with Research Software Engineering support from Dr Mike Croucher, Department of Computer Science.

Sheffield's Research Software Engineering Group are collaborators on the project and will provide support in High Performance Computing, software engineering and data management. This will help ensure that all developed software is efficient, correct, citable, easy to use and openly available. The aim is to maximise research impact and reproducibility through the application of modern software engineering methodologies.

The Out of Our Minds team.

Bashing down Windows for Materials Science

In the last few months Windows 10 has gained an interesting new capability: Bash. Originally the Linux subsystem was only available to those on the developer preview track, but since the Windows 10 Anniversary Update this subsystem has been available to all users who activate it. The subsystem is not an emulator, but a way for Windows 10 to run Linux applications, and to use the Linux Bash environment, through dynamic mapping between Linux system calls and Windows ones.

As a computational chemist working in the Department of Materials Science and Engineering, I find this a really excellent and exciting way in which Windows has evolved. There are a great many tools for my research. Some work on Windows, and are well designed for that OS, given that they are applications aimed at the people who make and analyse materials. These tools help users visualize crystal structures in 3D, or predict experimental observables, such as transmission electron microscopy images, from crystal structures. For computational chemists, these tools are often also invaluable, as they allow us to visually construct the crystal structures that we wish to simulate using quantum mechanics or classical force fields.

More often than not, the programs designed for running such chemical simulations have no GUI and run in a Unix environment. CASTEP is a UK-created Density Functional Theory simulation package, free for all UK academics, which is used extensively by researchers wishing to simulate solid-state materials such as batteries, piezoelectric materials and solar-power materials. Previously, to run CASTEP on a Windows machine, Cygwin or a virtual machine was required. However, with the new subsystem, CASTEP installs out of the box as if you were running any other Linux computer. The same is equally true of GULP, another program used in materials science, which is often used to design, test and analyse atomistic potentials. DL_POLY, another UK-created simulation package, is also used by a large user base to perform molecular dynamics simulations using atomistic potentials.

All of the programs mentioned above, and many more, such as the DFT codes VASP and WIEN2k and the molecular dynamics programs GROMACS and LAMMPS, can have their output analysed by these Windows 10 packages, and their inputs easily designed by the same crystal analysis programs, but they natively run best in a Unix environment.

The typical workaround has always been to use a virtual machine or Cygwin, to buy more expensive Apple computers, or to make users use Linux machines with which they may not be comfortable, especially if their previous workflow used packages that ran on Windows.

Personally I fall into that last category of users. While I can write a paper in LaTeX, I really don't like it compared to the WYSIWYG world of Word, and of course with Word I can use my favourite citation manager, Zotero (for which, by the way, the workaround using Dropbox is also good fun). That impact on workflow is an important thing, especially if you are dealing with final-year students who you want to work on your research. Ideally you want to get them up and running ASAP, where the only teaching you need to do is how to run the simulation packages. I don't want to have to teach them how to use an entirely new OS and, in the case of Linux, perhaps entirely new ways to write documents and make spreadsheets. This is especially true if the university course from the first year onwards has included access to MS Office and has done its teaching using those tools.

By being able to run many of these simulation packages through the Windows Bash Linux subsystem, there are minimal hoops to jump through. All students now have easy access to a machine that can run the simulation programs, without having to switch OS or log into a dedicated Unix server maintained for PhD and postdoc research. The lack of need for a virtual machine or emulator also means much less impact on the resources of personal machines, and fewer peculiarities with the allocation of computing resources on those machines. Furthermore, with respect to workflows, inputs and outputs from those simulation packages can all be handled under the one roof of the Windows 10 OS, leading to greater productivity.

Bash in Windows 10 has trampled down a barrier, making the use of the OS far more competitive, cost-effective and productive for computational chemistry.

A new member of the team: Mozhgan Kabiri Chimeh

My name is Mozhgan Kabiri Chimeh and I am a Research Associate/Research Software Engineer who specialises in performance acceleration targeting Many-core and Multi-core architectures. Research is my passion and I have carefully developed my education with research and teaching in mind. I completed my PhD in computer science in 2016 at the University of Glasgow where my area of research was accelerating logic gate circuit simulation targeting heterogeneous architectures. As part of my PhD project, I optimised and accelerated simulation algorithms and applied them to various parallel architectures (SIMD enabled machines, clusters, and GPUs). I have practical experience with parallel programming using High Performance Computing languages and models including OpenMP and CUDA.

I am glad to be a part of the RSE team as well as working as a researcher in the Computer Graphics and simulation modelling group here at the University of Sheffield. Feel free to get in touch with me via my email address (m.kabiri-chimeh (at) sheffield.ac.uk) or my LinkedIn.

When not working I divide my time between family, movies, artwork and macro-photography!

Manchester Julia Workshop

A few weeks ago (19-20th September 2016) I had the chance to attend the very first Julia workshop in the UK, held at the University of Manchester by the SIAM Student Chapter. The first day of the workshop consisted of a basic tutorial on Julia, installation instructions and around five hours of hackathon. The second day provided an introduction to carrying out research in various fields, such as data analysis, material science, natural language processing and bioinformatics, using Julia. The attendees were a mixture of PhD students, post-docs and lecturers, mainly from the University of Manchester but also from other universities and institutes (Warwick, Glasgow, Reading, MIT, Imperial College London, Earlham Institute).

Day 1: Tutorial and Hackathon

There are several ways to run Julia on any OS, including the command-line version, the Juno IDE and the Jupyter notebook (IJulia). If you want to avoid any installation process, there is also the browser-based JuliaBox.com. I was surprised that the whole process was smooth, without any software setup issues!

The tutorial consisted of a very basic demonstration of Julia, mostly on linear algebra and statistics, and after a short break we were left to explore Julia, collaborate and exchange ideas. Two exercises were also proposed to us:

  • First Steps With Julia by Kaggle, which teaches some basics of image processing and machine learning to identify characters from pictures.
  • Bio.jl Exercises by Ben J. Ward, which provide simple examples of using Bio.jl to perform simple operations and manipulations on biological sequences.

As I wanted to try as many libraries as possible, from image processing and data visualization to embedded Java, I ended up using a lot of different packages, so I found these (self-explanatory) package management commands the most useful: Pkg.add("PackageName"), Pkg.status(), Pkg.update(). Here, of course, I detected some compatibility issues: I was running Julia version 0.4.6 but it appears that most of the attendees were using version 0.4.5. Some commands seem to have changed between these versions; for example, in the Kaggle exercise the command float32sc(img), which converts an image to float values, was not working for me and I had to use the float32(img) command instead. A minor issue for a new-born language.

Day 2: Talks

The talks were centred around specific fields with heavy scientific computing (automatic differentiation, molecular modelling, natural language processing, bioinformatics and computational biology) and how Julia influences these fields. Each speaker presented their field of research and their Julia implementations, which ended up as further packages for the Julia community. More information about the speakers and the packages presented can be found on the Manchester Julia Workshop webpage.

Final words

Overall I was very satisfied with the Julia experience and I am awaiting its first official release (v1.0), which will probably arrive next year. Here are the main advantages that led me to believe that Julia could be the next in-demand programming language for scientific computing:

  • Combines the productivity of dynamic languages (e.g. Python) with the performance of static languages (C, Fortran). In other words: it is very easy to write optimized code and have your program run fast at the same time. Referring to Julia's speed, Dr Jiahao Chen from MIT mentioned the following in his talk: "You can define many methods for a generic function. If the compiler can figure out exactly which method you need to use when you invoke a function, then it generates optimized code".
  • Deals with the two-language problem: the base library and functionality are written in Julia itself.
  • It is free and open source (MIT licensed), which is highly advantageous for the scientific community when sharing code or extending existing code.
  • A great and friendly community, with users from various fields who constantly expand the existing Julia libraries.

Fun fact: variable names accept any Unicode character: \delta[tab] = 2 results in δ = 2, and \:smiley:[tab] = 4 results in 😃 = 4. Although, apart from some April Fools' pranks, it is advisable to follow Julia's stylistic conventions when naming variables!

Coffee and Cakes Event

RSE Sheffield is hosting its first coffee and cakes event on 4th October 2016 at 10:00 in the Ada Lovelace room on 1st floor of the Computer Science Department (Regents Court East). Attendance is free and you don't need to register (or bring coffee and cake with you). Simply call in and take the opportunity to come and have an informal chat about research software.

The event is a community event for anyone, not just computer scientists or members of the RSE team. If you work on software development, are an RSE, or simply want to talk about some aspect of software or software in teaching, then come along.

Accelerated versions of R for Iceberg

Too Long; Didn't Read -- Summary

I've built a version of R on Iceberg that is faster than the standard version for various operations. Documentation is at http://docs.hpc.shef.ac.uk/en/latest/iceberg/software/apps/r.html.

If it works more quickly for you, or if you have problems, please let us know by emailing rse@sheffield.ac.uk

Background

I took over building R for Iceberg, Sheffield's High Performance Computing System, around a year ago and have been incrementally improving both the install and the documentation with every release. Something that's been bothering me for a while is the lack of optimisation. The standard Iceberg build uses an ancient version of the gcc compiler and (probably) unoptimised versions of BLAS and LAPACK.

BLAS and LAPACK are extremely important libraries -- they provide the code that programs such as R use for linear algebra: matrix-matrix multiplication, Cholesky decomposition, principal component analysis and so on. It's important to note that there are lots of implementations of BLAS and LAPACK: ATLAS, OpenBLAS and the Intel MKL are three well-known examples. The interfaces of all of these versions (originally defined in Fortran) are identical, which means you can use them interchangeably, but the speed of the implementation can vary considerably.

The BLAS and LAPACK implementations on Iceberg are undocumented (before my time!) which means that we have no idea what we are dealing with. Perhaps they are optimised, perhaps not. I suspected 'not'.

Building R with the Intel Compiler and MKL

The Intel Compiler Suite often produces the fastest executables of all available compilers for any given piece of Fortran or C/C++ code. Additionally, the Intel MKL is probably the fastest implementation of BLAS and LAPACK available for Intel hardware. As such, I've had 'Build R using Intel Compilers and MKL' on my to-do list for some time.

Following a recent visit to the University of Lancaster, where they've been doing this for a while, I finally bit the bullet and produced some build-scripts. Thanks to Lancaster's Mike Pacey for help with this! There are two versions: one linked against the sequential MKL and one against the parallel MKL (links point to the exact commits that produced the builds used in this article).

The benchmark code is available in the Sheffield HPC examples repo: https://github.com/mikecroucher/HPC_Examples/. The exact commit that produced these results is 35de11e.

Testing

It's no good having fast builds of R if they give the wrong results! To make sure that everything is OK, I ran R's installation test suite and everything passed. If you have an account on iceberg, you can see the output from the test suite at /usr/local/packages6/apps/intel/15/R/sequential-3.3.1/install_logs/make_install_tests-R-3.3.1.log.

It's important to note that although the tests passed, there are differences in output between this build and the reference build that R's test suite is based on. This is due to a number of factors, such as the fact that floating-point addition is not associative and that the signs of eigenvectors are arbitrary, and so on.

A discussion around these differences and how they relate to R can be found on nabble.

How fast is it?

So is it worth it? I ran a benchmark called linear_algebra_bench.r that implemented five tests:

  • MatMul - Multiplies two random 1000 x 5000 matrices together
  • Chol - Cholesky decomposition of a 5000 x 5000 random matrix
  • SVD - Singular Value Decomposition of a 10000 x 2000 random matrix
  • PCA - Principal component analysis of a 10000 x 2000 random matrix
  • LDA - A Linear Discriminant Analysis problem

Run time of these operations compared to Iceberg's standard install of R is shown in the table below.

Execution time in seconds (Mean of 5 independent runs)

                                       MatMul    Chol    SVD     PCA      LDA
Standard R                             134.70   20.95   46.56   179.60   132.40
Intel R with sequential MKL             12.19    2.24    9.13    24.58    31.32
Intel R with parallel MKL (2 cores)      7.21    1.60    5.43    14.66    23.54
Intel R with parallel MKL (4 cores)      3.24    1.17    3.34     7.87    20.63
Intel R with parallel MKL (8 cores)      1.71    0.38    1.99     5.33    15.82
Intel R with parallel MKL (16 cores)     0.96    0.28    1.60     4.05    13.65

Another way of viewing these results is to see the speed up compared to the standard install of R. Even on a single CPU core, the Intel builds are between 4 and 11 times faster than the standard builds. Making use of 16 cores takes this up to 141 times faster in the case of Matrix-Matrix Multiplication!

Speed up compared to standard R

                                       MatMul   Chol   SVD   PCA   LDA
Standard R                                  1      1     1     1     1
Intel R with sequential MKL                11      9     5     7     4
Intel R with parallel MKL (2 cores)        19     13     9    12     6
Intel R with parallel MKL (4 cores)        42     18    14    23     6
Intel R with parallel MKL (8 cores)        79     55    23    34     8
Intel R with parallel MKL (16 cores)      141     75    29    44    10

Parallel environment

The type of parallelisation in use here is OpenMP. As such, you need to use Iceberg's openmp environment. That is, if you want 8 cores (say), add the following to your submission script:

#$ -pe openmp 8           # request 8 cores in Iceberg's OpenMP parallel environment
export OMP_NUM_THREADS=8  # tell the OpenMP runtime (and hence MKL) to use all 8 cores

Using OpenMP limits the number of cores you can use per job to the number available on a single node. At the time of writing, this is 16.

How many cores: Finding the sweet spot

Note that everything is fastest when using 16 cores! As such, it may be tempting to always use 16 cores for your jobs. This will almost always be a mistake. It may be that the aspect of your code that's accelerated by this build doesn't account for much of the runtime of your problem. As such, those 16 cores will sit idle most of the time -- wasting resources.

You'll also spend a lot longer waiting in the queue for 16 cores than you will for 2 cores, which may cancel out any speed gains.

You should always perform scaling experiments before deciding how many cores to use for your jobs. Consider the Linear Discriminant Analysis problem, for example. Using just one core, the Intel build gives us a 4-times speed-up compared to the standard build. Moving to 8 cores only makes it twice as fast again. As such, if you had lots of these jobs to do, your throughput would be higher running lots of single-core jobs than lots of 8-core jobs.
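As a back-of-the-envelope illustration of that throughput argument for the LDA case (using the speed-up figures from the tables above):

# Normalised runtime of one LDA job, relative to standard R
t_one_core = 1 / 4       # one core gives a 4x speed-up
t_eight_cores = 1 / 8    # eight cores give an 8x speed-up

cores_available = 16

# Jobs completed per unit time if the 16 cores run...
throughput_single = cores_available * (1 / t_one_core)            # 16 concurrent 1-core jobs -> 64
throughput_eight = (cores_available // 8) * (1 / t_eight_cores)   # 2 concurrent 8-core jobs  -> 16

print(throughput_single, throughput_eight)  # 64.0 vs 16.0: single-core jobs win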

If matrix-matrix multiply dominates your runtime, on the other hand, it may well be worth using 16 cores.

Using this version of R for your own work

As a user, there are a few things you need to be aware of with the Intel builds of R so I've created a separate documentation page for them. This is currently at http://docs.hpc.shef.ac.uk/en/latest/iceberg/software/apps/intel_r.html

My recommendation for using these builds is to work through the following procedure:

  • Ensure that your code runs with Iceberg's standard version of R and produce a test result.
  • In the first instance, switch to the sequential version of the Intel R build. In the best case, this will just require changing the module. You may also need to reinstall some of your packages, since the Intel build has a separate packages directory from the standard build.
  • If you see speed-up and the results are consistent with your test result, try the parallel version. Initially start with 2 cores and move upwards to find the sweet spot.

The University of Sheffield named an NVIDIA GPU Education Center

Sheffield NVIDIA Education Centre

This week I am very pleased to announce that the University of Sheffield has been awarded the status of an NVIDIA CUDA Education Centre.

The Faculty of Engineering has featured this in its latest faculty newsletter and the Department of Computer Science has published more details in a news feature.

But what does this mean to the RSE community at Sheffield and beyond?

The recognition of being an NVIDIA Education Centre is a reflection of the teaching on GPU computing provided by The University of Sheffield. In case you are unaware of what teaching there is: I run a 4th-year and Masters teaching module (COM4521/COM6521) which ran for the first time in the 2015/2016 Spring Semester. This course will run annually and is open to research staff as well as taught students. Last time there was roughly a 50:50 mix, including senior research staff and PhD students. It is much more involved than the one- or two-day courses which typically give only an introduction to GPU programming. If you are a researcher looking to exploit GPU performance in your research then this course is an opportunity to learn some new skills.

In the future this course will be made freely available so even researchers outside of The University of Sheffield will be able to go through the notes and worked examples (lab sheets).

Some of the other benefits of being an NVIDIA Education (and also an NVIDIA Research) Centre are:

  • Access to NVIDIA GPU hardware and software (via Iceberg and in the Diamond labs)
  • Significant discount on Tesla hardware purchases
  • Access to NVIDIA parallel programming experts and resources
  • Access to educational webinars and an array of teaching materials
  • Free in-the-cloud GPU programming training at nvidia.qwiklab.com
  • Support in the form of letters of support (with contributions in kind) for research proposals with emphasis on GPU computing or deep learning
  • Joint promotion, public relations, and press activities with NVIDIA

Other Training Opportunities

Through RSE Sheffield and GPUComputing@Sheffield shorter courses for GPU computing are also available. I will be announcing dates for 1-2 day CUDA courses shortly and am working with CICS in developing new Python CUDA material.

For those that missed the sign-up, we are also running a two-day deep learning with GPUs course in July. The places for this were in high demand and filled up within a day. This course will be repeated in due course and material from it will be made available off-line.

Other GPU announcements will be made on both this RSE blog and on the GPUComputing@Sheffield mailing list. Expect some exciting new hardware and software once the Iceberg upgrade is complete (shortly).

Paul