High Performance Computing with Maple, Part 1

Many people who use Maple on Sheffield's High Performance Computing (HPC) cluster do so interactively. They connect to the system, start a graphical X-Windows session and use Maple in exactly the same way as they would use it on their laptop. Such usage does have some benefits: giving access to more CPU cores and memory than you'd get on even the most highly specified of laptops, for example.

Interactive usage on the HPC system also has problems. Thanks to network latency, using a Graphical User Interface over an X-Windows connection can be a painful experience. Additionally, long calculations can tie up your computer for hours or days and if anything happens to the network connection during that time, you risk losing it all!

If you spend a long time waiting for your Maple calculations to run, it's probably time to think about moving to batch processing.

Batch processing

The idea behind batch processing is that you log in to the system, send your computation to a queue and then log out and get on with your life. The HPC system will process your computation when resources become available and email you when it's done. You can then log back in, transfer the results to your laptop and continue your analysis.

So, batch processing frees up your personal computer but it can also significantly increase your throughput. With batch processing, you can submit hundreds of computations to the queue simultaneously. The system will automatically process as many of them as it can in parallel -- allowing you to make use of dozens of large computers at once.

Batch processing is powerful but it comes at a price and that price is complexity.

Converting interactive worksheets to Maple language files

You are probably used to interacting with Maple via richly formatted worksheets and documents, which have the file extension .mw or .maple. Unfortunately, it is not possible to run Maple worksheets in batch mode, so we need to convert them to Maple Language files instead.

A Maple Language file has the extension .mpl and is a pure text file. To convert a worksheet to a Maple Language file, open the worksheet and click on File -> Export As -> Maple Input.

[Image: Convert to mpl]

An example

Here is an example .maple worksheet and the corresponding .mpl Maple Language File, created using the conversion process detailed above. We also have a Job submission script that will be explained later.

If you look at the .mpl file in a text editor, you will see that it contains plain text versions of all the Maple input commands that were present in the original worksheet.

myseries := series(sin(x), x = 0, 10);
poly := convert(myseries, polynom);
plot(poly, x = -2*Pi .. 2*Pi, y = -3 .. 3);

This is the file that we can run on the HPC system in batch mode.

The job submission script is a set of instructions to the HPC system's scheduler. It tells the system how much memory you want to use, what program you want to run and so on. Here are its contents:

#!/bin/bash
# Request 4 gigabytes of real memory (rmem)
# and 4 gigabytes of virtual memory (mem)
#$ -l mem=4G -l rmem=4G

# Make Maple 2015 available
module load apps/binapps/maple/2015

# Run Maple with the input file, series_example.mpl
maple < series_example.mpl
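As mentioned earlier, the scheduler can email you when your job finishes. A sketch of an extended submission script is below; -M and -m are standard Grid Engine directives, but the email address is a placeholder and the exact options accepted may vary, so check your cluster's documentation:

```shell
#!/bin/bash
# Request 4 gigabytes of virtual (mem) and real (rmem) memory
#$ -l mem=4G -l rmem=4G

# Email this address when the job ends (placeholder address)
#$ -M your.name@sheffield.ac.uk
#$ -m e

# Make Maple 2015 available
module load apps/binapps/maple/2015

# Run Maple with the input file
maple < series_example.mpl
```

With -m e the scheduler mails you once at job end; -m be would also mail you when the job begins running.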

To run the example on the system:

  • Transfer series_example.mpl and run_maple_job.sh to a directory on the HPC system. They both need to be in the same directory.
  • Log in to the system using a command line terminal and cd to the directory containing the files.
  • Use the ls command to confirm you really are in the right directory. You should see something like this:
[ab1test@testnode02 maple_example]$ ls

run_maple_job.sh  series_example.maple  series_example.mpl
  • Submit the job to the queue with the qsub command
qsub run_maple_job.sh
  • You should see something like
[ab1test@testnode02 maple_example]$ qsub run_maple_job.sh

Your job 1734126 ("run_maple_job.sh") has been submitted

The job number will differ from the one above. It is automatically allocated by the system and uniquely identifies the job.

  • At this point, you could log off the system and do something else if you wished but, since this is such a short job, it won't be long before the results appear. A few seconds to a minute under normal conditions.

  • Run the ls command again to see the results files.

[ab1test@testnode02 maple_example]$ ls

run_maple_job.sh  run_maple_job.sh.e1734126  run_maple_job.sh.o1734126  series_example.maple  series_example.mpl

There are two new files:

  • run_maple_job.sh.e1734126 - contains any error messages (hopefully empty here)
  • run_maple_job.sh.o1734126 - contains the results of your job

The number at the end of each filename is the job number.
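If you go on to script your submissions, the job number can be captured from qsub's output at submission time. A small sketch, assuming the standard Grid Engine message format shown above:

```shell
# Extract the job number from a Grid Engine submission message.
# The message text here is copied from the example above; in a real
# script it would come from: msg=$(qsub run_maple_job.sh)
msg='Your job 1734126 ("run_maple_job.sh") has been submitted'
jobid=$(echo "$msg" | awk '{print $3}')
echo "$jobid"
```

You could then, for example, name result files after the job number or check on that specific job in the queue.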

This completes your first batch submission using Maple.

Issues with graphs in Maple batch mode

The Maple worksheet we used in this example includes a plot command. This looks great in a Maple worksheet but looks very retro when performed in batch mode!

> plot(poly, x = -2*Pi .. 2*Pi, y = -3 .. 3);

                                      3+                                 H     
                                       +                                 H     
                                       +                                HH     
                                      2+                                H      
                                       +                                H      
                                       +                               HH      
                                       +                               H       
                                      1+    HHHHHHHHHH                 H       
           HHHHHHHH                    +  HHH         HHH             H        
          HH      HHH                  +HHH             HH           HH        
   -6    H     -4     HH   -2        H0*           2       HHH 4    HH     6   
         H             HHH         HHH +                     HHHHHHHH          
        H                 HHHHHHHHHH -1+                                       
        H                              +                                       
       HH                              +                                       
       H                               +                                       
       H                             -2+                                       
       H                               +                                       
      H                                +                                       
      H                              -3+                                       

You probably want to have something that looks a little nicer. The way to do this is to add a plotsetup command so that Maple writes the plot to an output file. For example, if we want to create a .gif file, our Maple Language file becomes

myseries := series(sin(x), x = 0, 10);
poly := convert(myseries, polynom);

plotsetup(gif, plotoutput = "plot.gif");
plot(poly, x = -2*Pi .. 2*Pi, y = -3 .. 3);

The plotsetup command can output a number of file types. See Maple's plotsetup documentation for details.
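For instance, to produce a PostScript file instead of a .gif, a sketch along the same lines (the filename here is arbitrary):

```maple
# Send the next plot to a PostScript file
plotsetup(postscript, plotoutput = "plot.ps");
plot(poly, x = -2*Pi .. 2*Pi, y = -3 .. 3);
# Restore the default plotting device
plotsetup(default);
```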

A full batch example for you to try is available below.

The results of transferring these files to the system and submitting with qsub run_maple_job_2.sh should include a file called plot.gif that looks like this

[Image: Maple plot]

Future articles

In future articles, we'll be looking at how to make use of Maple's parallel computing constructs, along with more advanced scheduling tricks that allow us to run hundreds of jobs simultaneously.

Further reading

Fun with strace

How I solved a mystery with strace and bash

So I'm dabbling with iceberg, The University of Sheffield's HPC, and I finally get round to putting my .profile on there. And I remember that I don't like the way less clears the screen when I've finished reading a man page.

I need to set my LESS environment variable to -X. So I add that to my .profile. I do exec bash -l to emulate logging back in.

Doesn't work. Still clears screen when reading man pages. Turns out LESS isn't set. What is going wrong with my .profile?

I'm becoming a fan of strace for this sort of debugging.

Have a look at this. When I run this command:

strace -e signal= -e open bash -l -c 'echo SCRIPT GOT HERE'

I get this output (long and boring, skip and come back to the bits I refer to):

open("/etc/ld.so.cache", O_RDONLY)      = 3
open("/lib64/libtinfo.so.5", O_RDONLY)  = 3
open("/lib64/libdl.so.2", O_RDONLY)     = 3
open("/lib64/libc.so.6", O_RDONLY)      = 3
open("/dev/tty", O_RDWR|O_NONBLOCK)     = 3
open("/usr/lib/locale/locale-archive", O_RDONLY) = 3
open("/proc/meminfo", O_RDONLY|O_CLOEXEC) = 3
open("/usr/lib64/gconv/gconv-modules.cache", O_RDONLY) = 3
open("/etc/profile", O_RDONLY)          = 3
open("/etc/profile.d/", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3
open("/dev/null", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
open("/etc/profile.d/colorls.sh", O_RDONLY) = 3
open("/dev/null", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
open("/dev/null", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
open("/dev/null", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
open("/etc/profile.d/cvs.sh", O_RDONLY) = 3
open("/dev/null", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
open("/etc/profile.d/ge.sh", O_RDONLY)  = 3
open("/dev/null", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
open("/etc/profile.d/glib2.sh", O_RDONLY) = 3
open("/dev/null", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
open("/etc/profile.d/lang.sh", O_RDONLY) = 3
open("/dev/null", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
open("/etc/profile.d/less.sh", O_RDONLY) = 3
open("/dev/null", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
open("/etc/profile.d/modules.sh", O_RDONLY) = 3
open("/usr/share/Modules/init/bash", O_RDONLY) = 3
open("/usr/share/Modules/init/bash_completion", O_RDONLY) = 3
open("/dev/null", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
open("/etc/profile.d/qt.sh", O_RDONLY)  = 3
open("/dev/null", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
open("/etc/profile.d/set-bmc-url.sh", O_RDONLY) = 3
open("/dev/null", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
open("/etc/profile.d/shef-login.sh", O_RDONLY) = 3
open("/dev/null", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
open("/etc/profile.d/udisks-bash-completion.sh", O_RDONLY) = 3
open("/dev/null", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
open("/etc/profile.d/vim.sh", O_RDONLY) = 3
open("/dev/null", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
open("/usr/share/locale/locale.alias", O_RDONLY) = 3
open("/usr/share/locale/en_GB.UTF-8/LC_MESSAGES/bash.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en_GB.utf8/LC_MESSAGES/bash.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en_GB/LC_MESSAGES/bash.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en.UTF-8/LC_MESSAGES/bash.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en.utf8/LC_MESSAGES/bash.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/share/locale/en/LC_MESSAGES/bash.mo", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/dev/null", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 3
open("/etc/profile.d/which2.sh", O_RDONLY) = 3
open("/home/md1xdrj/.bash_profile", O_RDONLY) = 3
open("/home/md1xdrj/.bashrc", O_RDONLY) = 3
open("/etc/bashrc", O_RDONLY)           = 3
open("/etc/profile.d/modules.sh", O_RDONLY) = 3
open("/usr/share/Modules/init/bash", O_RDONLY) = 3
open("/usr/share/Modules/init/bash_completion", O_RDONLY) = 3
+++ exited with 0 +++

strace runs a command and traces the system calls (so I suppose strace is short for System Trace). I'm running the command line bash -l -c 'echo SCRIPT GOT HERE' under strace. bash -l is a login shell, so it should source my .profile.

First thing to notice about running things with strace is that you get lots of output, and that's after I've used two options to reduce the amount of it. The -e open option restricts strace so that it only shows open() system calls; normally it shows all system calls, which is way more output. The -e signal= option means it won't show any signals; without it, you see all of them. Most programs don't see many signals, but in this case there are a fair number of child-process management signals that are not particularly interesting.

The first few files are related to C runtimes and dynamic linking (/etc/ld.so.cache, /lib64/libc.so.6, and so on). Then we get /etc/profile. Aha! bash is reading the system-wide profile file, which, it turns out, causes it to read the bag of little profile scripts kept in /etc/profile.d/ (most of which are specific to the Sheffield HPC).

(I've no idea what the obsession with opening /dev/null in between every script is by the way; some crazy bash thing. whatever)

Then, eventually, near the bottom, we see bash opening /home/md1xdrj/.bash_profile. And this is the culprit. I'm like "wait, WAT!?", "I have a .bash_profile?"

It turns out that, yes, I do have a .bash_profile. I wasn't expecting that (it wasn't created by me). A quick perusal of man bash[*1] reveals that if ~/.bash_profile exists then it will be sourced and .profile will not.
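A quick way to convince yourself of this precedence, without touching your real dotfiles, is to try it in a throwaway HOME. A sketch, assuming bash and mktemp are available (your /etc/profile may add extra output of its own):

```shell
# Demonstrate login-shell startup precedence in a throwaway HOME
tmp=$(mktemp -d)
echo 'echo FROM_PROFILE' > "$tmp/.profile"
echo 'echo FROM_BASH_PROFILE' > "$tmp/.bash_profile"

# A login shell (-l) sources .bash_profile and silently skips .profile
out=$(HOME="$tmp" bash -l -c ':' 2>/dev/null)
echo "$out"

rm -rf "$tmp"
```

Only FROM_BASH_PROFILE should appear; delete the temporary .bash_profile and re-run it, and FROM_PROFILE shows up instead.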

So I remove my ~/.bash_profile and life is good again.


strace is a great tool and well worth exploring a little bit (fun fact: you can attach to already running processes with strace -p). So in this case I could've read the manual and inspected the file system to see what files bash would source, but strace is more direct. The manual might be out of date, misunderstood, or just plain wrong. But strace cannot lie, it shows me what system calls a tool is actually making.

The Truth.

Of course The Truth that strace provides is really just a truth. There is a whole world of complexity that strace hides from us. Most obviously, we don't get to see all the instructions that get executed in between the system calls. We probably don't want to. strace is serving up an abstraction, and that's a useful Truth to deal with.


If you liked this, then you should check out this strace fanzine. I cannot recommend it enough. It's enthusiastic, witty, fun to read, and you will learn something about strace and system calls (I did!).


  1. This is what I call "a joke". Everyone should read man bash, but it is like Joyce's Ulysses: better read the Cliffs Notes[*2].

  2. There are no Cliffs Notes for bash.

We're hiring - RSE Position on Massive Scale Complex Systems Simulation with Accelerated Computing

A new position is available as a Research Associate/Research Software Engineer in the area of complex systems modelling using emerging high performance parallel architectures.

This post can be configured in two different ways. Either as a 3 year Research Associate/Research Software Engineer only, or as a 5 year post working as a Research Associate/Research Software Engineer (60%) and a Research Software Consultant (40%). Candidate’s preference for either option will be discussed at the interview stage.

More details on The University of Sheffield job site

9 steps for quality research software

I attended the Software Sustainability Institute's Collaborations Workshop last month. This annual workshop is one of the primary events in the Research Software Engineering calendar and I highly recommend going to one if you are involved in the development of research software in any way.

One of the things I worked on was a collaborative blog post called 9 steps for quality research software. I also wrote an article about some of the work I do called The accident and emergency of Research Software Engineering.

Sheffield RSE team collaborates with lecturers to teach computation


Last year, I worked with Dr Marta Milo of the Department of Biomedical Science to develop a new course that taught the basics of Bioinformatics to biology undergraduates. Marta took care of the subject matter while I took care of getting it all to work in the Jupyter Notebook in Sage Math Cloud. I also gave crash courses in Jupyter, git and SageMathCloud to support staff.

The course was a great success and demonstrated what can be achieved when academics partner with Research Software Engineers to deliver top quality computational teaching. My favourite comment from student feedback was "The hardest thing ever, stressful, frustrating but very rewarding" -- welcome to my life!

This success has led to me being invited to departmental teaching away days for subject areas that include a lot of computation in their syllabus. The most recent of these was with the Department of Chemical and Biological Engineering. I only had ten minutes, but managed to include an introduction to the Sheffield RSE group, a quick demonstration of the Jupyter notebook, a discussion of the benefits of using SageMathCloud instead of the local managed desktop, and the possibilities offered by this combination of technologies.

Sometimes, it's useful to be able to talk quickly!

Other discussions in the session I was involved with included a fantastic overview of flipped classroom teaching by Siddharth Patwardhan and some of the upcoming challenges and opportunities in the higher education teaching sector by Wyn Morgan, Sheffield's Pro-Vice-Chancellor for Learning and Teaching.

Channelling my inner Ferris Bueller: the world of research software moves pretty fast. If you don't stop and look around for a while, you could miss it. It was a pleasure to show off some of my favourite technology to the chemical and biological engineers and I look forward to working with them all in the future.

Windows 10 to support Linux binaries

The big news from Microsoft is that, from this summer, Windows 10 will support user-mode programs from the popular Ubuntu Linux distribution.

By "user-mode" we mean non-kernel things or, in other words, anything you can type into a Bash shell command window. This is complete binary compatibility: you can, for example, apt-get your favourite Linux tool or just copy it over from an Ubuntu Linux system and it will run (assuming you have any libraries it needs).

The underlying technology is a new Windows service that dynamically maps Linux system calls to Windows ones, whilst maintaining the Linux semantics.

This is a big step for several reasons; here are just some:

  • researchers who currently dual boot their laptops/desktops may not need to
  • researchers who run either Linux or Windows in a virtual machine may not need to
  • staff and students who buy expensive MacBooks mainly for their Unix subsystem could buy a cheaper Windows 10 laptop
  • as Windows 10 will be binary compatible with Linux, it could be said to have an advantage over Apple Macs, which use the BSD version of Unix (BSD has numerous small, but sometimes annoying, differences from the tools found on Linux)
  • it makes the path from developing on a laptop to a high-performance computing cluster much more straightforward (i.e. basically the same Linux toolset all the way)
  • new researchers attending software/data carpentry courses - to learn the basics of good software engineering - will no longer have to work in a system alien to their day-to-day computing environment

I'm sure you can think of others.

The devil is always in the details, but from reports to-date this is good news for researchers.

Open Astronomy / Software Carpentry Workshop

Last week (11-15th January 2016) saw the first Open Astronomy workshop held at The University of Sheffield for the UK astronomy and solar physics communities. The first two days of the workshop consisted of the core syllabus of Software Carpentry, covering git, bash and an introduction to programming with Python. The last three days provided an introduction to carrying out research in astronomy using Python. The attendees were mostly PhD students in Astrophysics from the University of Sheffield; however, there was also representation from other fields (mathematics, medicine, ecology), from other universities (St. Andrews, Reading) and from people at different stages in their careers (post-docs).

[Image: David teaching bash]

The workshop was taught by four different instructors (Sam, Drew, Stuart and David) with the help of Tom (an Astropy developer) on Thursday. We all brought different expertise and shared the teaching out to keep the workshop lively over a gruelling five days: none of us taught more than three hours in one go, and we alternated days between teaching and assisting. We used the red-green post-it technique suggested by Software Carpentry to see how people were getting on during the sessions; at the end of every session the learners also used the post-it notes to give us feedback (green for something good you've learnt, red for something that could be improved). This not only helped us plan the next workshop, it also helped the next instructor that day!

As usual there were some software setup issues at the beginning of the week, but we got very close to a clean start! Only one person in the whole class had trouble with the Jupyter notebook - it simply was not able to execute any command. We didn't manage to fix the problem, though it was probably the oldest laptop (running Windows 7) in the class. Besides that case, we came across a couple of other problems on Windows machines: on one of them git log was blocking the screen, and the other could not open the text editor (Notepad++ in this case) when committing (or merging with the default message).

Each day we updated the official repository with lesson templates in Jupyter notebook format, in which an outline of the class was available and the code cells were left empty, to be filled in while following the lecture. Once a session was completed, the notes could be browsed online. In this way everyone had to fork our repository on GitHub, then pull from upstream at the start of every session and push to their origin at the end. This cemented the work on git and GitHub from the start of the week, made sure everyone had a backup of all the work they completed during the week, and taught the usual git workflow of contributing to a larger project on GitHub. Thanks to the visualisations on GitHub we can see how all these forks evolved, and see whether the participants keep using GitHub!
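The per-session workflow described above can be sketched as follows (the repository URLs here are placeholders, not the real workshop repository):

```shell
# One-time setup: fork on GitHub, clone your fork, and track upstream
git clone https://github.com/<you>/workshop-notes.git
cd workshop-notes
git remote add upstream https://github.com/<instructors>/workshop-notes.git

# Start of each session: fetch the new lesson templates
git pull upstream master

# End of each session: back up your filled-in notebooks to your fork
git add --all
git commit -m "Notes from today's session"
git push origin master
```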

During one of the sessions we discovered a bug in the latest release of SunPy. We used the opportunity to demonstrate what to do in these cases: file an issue on GitHub and provide the information the developers need to reproduce the error.

We will be looking to repeat this workshop later in the year, probably before term starts in late September. In the meantime, feel free to use our material and contact us if you want more information.