Array-oriented programming offers a unique blend of programmer productivity and high-performance parallel execution. As an abstraction, it directly mirrors the high-level mathematical constructions commonly used in fields ranging from the natural sciences through engineering to financial modelling. As a language feature, it exposes regular control flow, exhibits structured data dependencies, and lends itself to many kinds of program analysis. Moreover, many modern computer architectures, particularly highly parallel ones such as GPUs and FPGAs, are well suited to executing array operations efficiently.
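For readers new to the paradigm, a minimal sketch may help; it uses NumPy purely as one illustrative library, and the names below are this sketch's own, not anything prescribed by the workshop. The whole-array expression mirrors the mathematical formula directly, while the loop version spells out the same computation element by element:

```python
import numpy as np

# Array-oriented style: the expression y = 2x + 1 is written once,
# over the whole array, with no explicit loop. The library (or a
# compiler) is free to vectorize or parallelize it.
x = np.arange(5, dtype=float)
y = 2.0 * x + 1.0

# Equivalent loop-based style, written out for contrast.
y_loop = np.empty_like(x)
for i in range(len(x)):
    y_loop[i] = 2.0 * x[i] + 1.0

assert np.array_equal(y, y_loop)
```

The array form is both closer to the mathematics and easier for tools to analyse, since the data dependencies are structural rather than hidden inside loop bodies.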
The ARRAY workshop is intended to bring together researchers from many different communities, including language designers, library developers, compiler researchers, and practitioners, who are using or working on numeric, array-centric aspects of programming languages, libraries and methodologies from all domains: imperative or declarative; object-oriented or functional; interpreted or compiled; strongly typed, weakly typed, or untyped.
Submissions are due on 8th April 2019, and the workshop takes place on Saturday 22nd June 2019, co-located with PLDI in Phoenix, Arizona. For more information about how to submit, see the call for papers.
The ARRAY series of workshops explores:
- formal semantics and design issues of array-oriented languages and libraries;
- productivity and performance in compute-intensive application areas of array programming;
- systematic notation for array programming, including axis- and index-based approaches;
- intermediate languages, virtual machines, and program-transformation techniques for array programs;
- representation of and automated reasoning about mathematical structure, such as static and dynamic sparsity, low-rank patterns, and hierarchies of these, with connections to applications such as graph processing, HPC, tensor computation and deep learning;
- interfaces between array- and non-array code, including approaches for embedding array programs in general-purpose programming languages; and
- efficient mapping of array programs, through compilers, libraries, and code generators, onto execution platforms, targeting multi-cores, SIMD devices, GPUs, distributed systems, and FPGA hardware, by fully automatic and user-assisted means.
Array programming is at home in many communities, including language design, library development, compiler optimization, and scientific computing. ARRAY is intended as a forum where these communities can exchange ideas on the construction of computational tools for manipulating arrays.
The invited talk will be given by Peter J. Braam:
Array Processing on Steroids for the SKA Radio-Telescope
Abstract: The Square Kilometre Array (SKA) radio telescope will be a massive scientific instrument entering service in the late 2020s. The conversion of its antenna signals to images and the detection of transient phenomena is a massive computational undertaking, requiring 200 PB/s of memory bandwidth, all dedicated to array processing. In this talk we will give an overview of the data processing in the telescope and the process that has been followed to design suitable algorithms and systems. We will highlight parts of the challenge that have interesting relationships to computer science, and then transition to review recent technological developments, such as memory, machine-learning accelerators, and new floating-point formats, that may prove helpful.
Bio: Peter Braam is a scientist and entrepreneur focused on problems in large scale computing. Originally trained as a mathematician, he has worked at several academic institutions including Oxford, CMU and Cambridge. One of his startup companies developed the widely used Lustre file system. During the last few years he has focused on computing for the SKA telescope and on research in data intensive computing.
Sat 22 Jun (times are displayed in the Tijuana, Baja California time zone)
09:00 - 10:00
|Array Processing on Steroids for the SKA Radio-Telescope|
Speaker: Peter Braam
10:00 - 11:00
|Convolutional Neural Networks in APL|
|Toward Generalized Tensor Algebra for ab initio Quantum Chemistry Methods|
11:30 - 12:30
|Finite Difference Methods Fengshui: Alignment through a Mathematics of Arrays|
|Linear Algebraic Depth-First Search|
14:00 - 15:30
|TeIL: a type-safe imperative Tensor Intermediate Language|
|Records with Rank Polymorphism|
|Data-Parallel Flattening by Expansion|
16:00 - 17:30
|ALPyNA: Acceleration of Loops in Python for Novel Architectures|
|Code Generation in Linnea (extended abstract)|
|High-Level Synthesis of Functional Patterns with Lift|
Call for Papers
Submissions are welcome in two categories: full papers and extended abstracts. All submissions should be formatted in conformance with the ACM SIGPLAN proceedings style. Accepted submissions in either category will be presented at the workshop.
Full papers may be up to 12pp, on any topic related to the focus of the workshop. They will be thoroughly reviewed according to the usual criteria of relevance, soundness, novelty, and significance; accepted submissions will be published in the ACM Digital Library.
Extended abstracts may be up to 2pp; they may describe work in progress, tool demonstrations, or summaries of work published in full elsewhere. The focus of an extended abstract should be to explain why the proposed presentation will be of interest to the ARRAY audience. Submissions will be lightly reviewed only for relevance to the workshop, and will not be published in the ACM Digital Library.
Whether full papers or extended abstracts, submissions must be in PDF format, printable in black and white on US Letter-sized paper. Papers must adhere to the standard SIGPLAN conference format: two columns, ten-point font. A suitable document template for LaTeX is available at http://www.sigplan.org/Resources/Author/.
Papers must be submitted using EasyChair.
Authors take note: The official publication date of full papers is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the workshop. The official publication date affects the deadline for any patent filings related to published work.