PLDI 2021 | 逸翎清晗🌈

Replication Package for Article: Incremental Whole-Program Analysis in Datalog with Lattices

Authors: Szabó, Tamás
Keywords: Datalog, Incremental Computing, Static Analysis

Abstract

This is an Ubuntu-based virtual machine with the IncA program analysis framework and benchmark Java programs already set up on it. The artifact can be used to execute the incremental static analyses described in the paper “Incremental Whole-Program Analysis in Datalog with Lattices” and to reproduce the presented measurement results.

DOI: 10.1145/3453483.3454026


Mechanized Proof and Model Checker for Article: Revamping Hardware Persistency Models

Authors: Cho, Kyeongmin and Lee, Sung-Hwan and Raad, Azalea and Kang, Jeehoon
Keywords: Armv8, mechanized proof, model checker, non-volatile random-access memory, NVRAM, persistency semantics, persistent memory, x86

Abstract

Revamping Hardware Persistency Models: View-Based and Axiomatic Persistency Models for Intel-x86 and Armv8

This is the artifact for the following paper:

Kyeongmin Cho, Sung-Hwan Lee, Azalea Raad, and Jeehoon Kang. Revamping Hardware Persistency Models: View-Based and Axiomatic Persistency Models for Intel-x86 and Armv8. PLDI 2021.

Contributions (paper §1)

  • We discuss the shortcomings of the existing persistency models of Intel-x86/Armv8 and present an intuitive account of our solution as view-based models (§2).
  • We develop x86_view, a new view-based model for Intel-x86 concurrency (§3).
  • We develop Px86_view (§3.5) and PArmv8_view (§6.2), respectively extending the x86_view and Armv8_view models to account for persistent memory.
  • We present Px86_axiom (§4) and PArmv8_axiom (§6.3), our axiomatic models of Intel-x86 and Armv8 persistency that simplify and repair the state-of-the-art models of the respective architectures. We prove that our axiomatic models are equivalent to the authoritative semantics reviewed by Intel and Arm engineers, modulo our proposed fixes (§4.4 and §6.3). Our proposed fix in PArmv8_axiom has been reviewed by Arm engineers.
  • We prove that Px86_view and PArmv8_view are equivalent to Px86_axiom and PArmv8_axiom, respectively. The equivalence proof is mechanized in Coq (§5 and §6.4).
  • We develop a model checker for persistency and use it to verify several representative examples under PArmv8_view (§7). We conclude with related and future work (§8).

Artifacts

  • Coq formalization (§2-6) of the hardware persistency models (Px86-{view, axiom} and PArmv8-{view, axiom})

    DOI: 10.1145/3453483.3454027


    Repairing serializability bugs in distributed database programs via automated schema refactoring

    Authors: Rahmani, Kia and Nagar, Kartik and Delaware, Benjamin and Jagannathan, Suresh
    Keywords: Weak Consistency, Schema Refactoring, Program Repair

    Abstract

    Serializability is a well-understood concurrency control mechanism that eases reasoning about highly-concurrent database programs. Unfortunately, enforcing serializability has a high performance cost, especially on geographically distributed database clusters. Consequently, many databases allow programmers to choose when a transaction must be executed under serializability, with the expectation that transactions would only be so marked when necessary to avoid serious concurrency bugs. However, this is a significant burden to impose on developers, requiring them to (a) reason about subtle concurrent interactions among potentially interfering transactions, (b) determine when such interactions would violate desired invariants, and (c) then identify the minimum number of transactions whose executions should be serialized to prevent these violations. To mitigate this burden, this paper presents a sound fully-automated schema refactoring procedure that refactors a program’s data layout – rather than its concurrency control logic – to eliminate statically identified concurrency bugs, allowing more transactions to be safely executed under weaker and more performant database guarantees. Experimental results over a range of realistic database benchmarks indicate that our approach is highly effective in eliminating concurrency bugs, with safe refactored programs showing an average of 120% higher throughput and 45% lower latency compared to a serialized baseline.

    DOI: 10.1145/3453483.3454028


    Artifact for PLDI 2021 Paper Gleipnir: Toward Practical Error Analysis for Quantum Programs

    Authors: Tao, Runzhou and Shi, Yunong and Yao, Jianan and Hui, John and Chong, Frederic T. and Gu, Ronghui
    Keywords: approximate computing, error analysis, Quantum programming

    Abstract

    The artifact contains the Docker image file needed to reproduce the results presented in the paper.

    DOI: 10.1145/3453483.3454029


    Alive2: bounded translation validation for LLVM

    Authors: Lopes, Nuno P. and Lee, Juneyoung and Hur, Chung-Kil and Liu, Zhengyang and Regehr, John
    Keywords: Translation Validation, IR Semantics, Compilers, Automatic Software Verification

    Abstract

    We designed, implemented, and deployed Alive2: a bounded translation validation tool for the LLVM compiler’s intermediate representation (IR). It limits resource consumption by, for example, unrolling loops up to some bound, which means there are circumstances in which it misses bugs. Alive2 is designed to avoid false alarms, is fully automatic through the use of an SMT solver, and requires no changes to LLVM. By running Alive2 over LLVM’s unit test suite, we discovered and reported 47 new bugs, 28 of which have been fixed already. Moreover, our work has led to eight patches to the LLVM Language Reference—the definitive description of the semantics of its IR—and we have participated in numerous discussions with the goal of clarifying ambiguities and fixing errors in these semantics. Alive2 is open source and we also made it available on the web, where it has active users from the LLVM community.
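
The bounded-validation idea can be illustrated with a toy stand-in: instead of encoding IR semantics into an SMT solver as Alive2 does, the sketch below exhaustively checks a peephole rewrite over all 8-bit inputs. The `src`/`tgt` functions and the 8-bit domain are invented for illustration; real Alive2 also models undefined behaviour and poison values.

```python
# Toy analogue of bounded translation validation: exhaustively check
# that an "optimized" function agrees with the original over all 8-bit
# inputs. Alive2 itself uses an SMT solver instead of enumeration.

MASK = 0xFF  # model i8 arithmetic by masking to 8 bits

def src(x):          # original IR:  x * 2
    return (x * 2) & MASK

def tgt(x):          # optimized IR: x << 1
    return (x << 1) & MASK

def validate(f, g, bits=8):
    """Return a counterexample input, or None if g matches f."""
    for x in range(1 << bits):
        if f(x) != g(x):
            return x
    return None

assert validate(src, tgt) is None   # the rewrite is correct for i8

# A buggy rewrite is caught with a concrete counterexample:
bad = validate(lambda x: (x + 1) & MASK, lambda x: x | 1)
assert bad == 1                     # x = 1: src gives 2, tgt gives 1
```

The same loop structure carries over to the real setting, with the enumeration replaced by a solver query over symbolic inputs.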

    DOI: 10.1145/3453483.3454030


    Coq Development for “Transfinite Iris: Resolving an Existential Dilemma of Step-Indexed Separation Logic”

    Authors: Spies, Simon and Gäher, Lennard
    Keywords: Coq, Iris, mechanized proofs, separation logic, transfinite step-indexing

    Abstract

    This is the artifact for the paper “Transfinite Iris: Resolving an Existential Dilemma of Step-Indexed Separation Logic”. It contains the Coq mechanization of Transfinite Iris, in particular its soundness proof, program logics, and the examples presented in the paper. The artifact contains the Transfinite Iris development both in a VM image with pre-built sources and as a .zip source archive.

    DOI: 10.1145/3453483.3454031


    Perceus: garbage free reference counting with reuse

    Authors: Reinking, Alex and Xie, Ningning and de Moura, Leonardo and Leijen, Daan
    Keywords: Reference Counting, Handlers, Algebraic Effects

    Abstract

    We introduce Perceus, an algorithm for precise reference counting with reuse and specialization. Starting from a functional core language with explicit control-flow, Perceus emits precise reference counting instructions such that (cycle-free) programs are garbage free, where only live references are retained. This enables further optimizations, like reuse analysis that allows for guaranteed in-place updates at runtime. This in turn enables a novel programming paradigm that we call functional but in-place (FBIP). Much like tail-call optimization enables writing loops with regular function calls, reuse analysis enables writing in-place mutating algorithms in a purely functional way. We give a novel formalization of reference counting in a linear resource calculus, and prove that Perceus is sound and garbage free. We show evidence that Perceus, as implemented in Koka, has good performance and is competitive with other state-of-the-art memory collectors.
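
The reuse idea can be sketched in a few lines. This is only a Python analogue with invented names and an explicit refcount field (Perceus itself emits such instructions at compile time in Koka): when a cons cell is uniquely referenced, `map` updates it in place instead of allocating.

```python
# Minimal sketch (not Koka/Perceus itself) of reference counting with
# reuse: a cons cell carries an explicit refcount, and rc_map reuses a
# cell in place when it holds the only reference, otherwise it copies.

class Cell:
    __slots__ = ("rc", "head", "tail")
    allocations = 0                       # count allocations for the demo
    def __init__(self, head, tail):
        Cell.allocations += 1
        self.rc, self.head, self.tail = 1, head, tail

def rc_map(f, xs):
    if xs is None:
        return None
    if xs.rc == 1:                        # unique: reuse in place (FBIP)
        xs.head = f(xs.head)
        xs.tail = rc_map(f, xs.tail)
        return xs
    xs.rc -= 1                            # shared: drop our ref and copy
    return Cell(f(xs.head), rc_map(f, xs.tail))

# Build [1, 2, 3]; it is uniquely referenced, so mapping allocates nothing.
xs = Cell(1, Cell(2, Cell(3, None)))
before = Cell.allocations
ys = rc_map(lambda v: v * 10, xs)
assert ys is xs and Cell.allocations == before
assert [ys.head, ys.tail.head, ys.tail.tail.head] == [10, 20, 30]
```

The point of the sketch is the `rc == 1` test: precise counts make "last owner" statically observable, which is what enables guaranteed in-place updates.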

    DOI: 10.1145/3453483.3454032


    PUMPKIN Pi

    Authors: Ringer, Talia and Porter, RanDair and Yazdani, Nathaniel and Leo, John and Grossman, Dan
    Keywords: Coq, interactive theorem provers, proof assistants, proof engineering, proof evolution, proof repair

    Abstract

    This is the artifact for the PLDI 2021 paper “Proof Repair Across Type Equivalences.” The anonymized version has been vetted by AEC as functional and reusable. A deanonymized version corresponding to the links in the paper has been uploaded as a second version (version “deanonymized”).

    DOI: 10.1145/3453483.3454033


    Compiler-Assisted Object Inlining with Value Fields

    Authors: Bruno, Rodrigo and Jovanovic, Vojin and Wimmer, Christian and Alonso, Gustavo
    Keywords: Compiler Optimization, Language Implementation, Memory Management, Object Oriented Programming, Runtime Systems

    Abstract

    Object Oriented Programming has flourished in many areas, ranging from web-oriented microservices and data processing to databases. However, while representing domain entities as objects is appealing to developers, it leads to high data fragmentation, as data is loaded into applications as large collections of data objects, resulting in high memory footprint and poor locality.

    To minimize memory footprint and increase memory locality, embedding the payload of an object into another object (object inlining) has been considered before but existing techniques present severe limitations that prevent it from becoming a widely adopted technique. We argue that object inlining is mostly useful to optimize the application data-path and that objects in the data-path have value semantics, which unlocks great potential for inlining objects. We therefore propose value fields, an abstraction which allows fields to be marked as having value semantics.

    We implement value fields for GraalVM Native Image. Object inlining is implemented as a compiler pipeline phase that mutates both object layouts and application code to access inlined fields. Experimental evaluation shows that applying value fields in real-world frameworks such as Apache Spark, Spring Boot, and Micronaut requires minimal or even no effort at all from developers. Results show improvements in throughput of up to 3x, memory footprint reductions of up to 40% and reduced GC pause times of up to 35%.

    DOI: 10.1145/3453483.3454034


    Unleashing the hidden power of compiler optimization on binary code difference: an empirical study

    Authors: Ren, Xiaolei and Ho, Michael and Ming, Jiang and Lei, Yu and Li, Li
    Keywords: Compiler Optimization, Binary Code Difference

    Abstract

    Hunting binary code difference without source code (i.e., binary diffing) has compelling applications in software security. Due to the high variability of binary code, existing solutions have been driven towards measuring semantic similarities from syntactically different code. Since compiler optimization is the most common source contributing to binary code differences in syntax, testing resilience against the changes caused by different compiler optimization settings has become a standard evaluation step for most binary diffing approaches. For example, 47 top-venue papers in the last 12 years compared different program versions compiled by default optimization levels (e.g., -Ox in GCC and LLVM). Although many of them claim they are immune to compiler transformations, their resistance to non-default optimization settings remains unclear. In particular, we have observed that adversaries explore non-default compiler settings to amplify malware differences. This paper takes the first step toward systematically studying the effectiveness of compiler optimization on binary code differences. We tailor search-based iterative compilation for the auto-tuning of binary code differences. We develop BinTuner to search for near-optimal optimization sequences that maximize the amount of binary code difference. We run BinTuner with GCC 10.2 and LLVM 11.0 on SPEC benchmarks (CPU2006 & CPU2017), Coreutils, and OpenSSL. Our experiments show that at the cost of 279 to 1,881 compilation iterations, BinTuner can find custom optimization sequences that are substantially better than the general -Ox settings. BinTuner’s outputs seriously undermine prominent binary diffing tools’ comparisons. In addition, the detection rate of the IoT malware variants tuned by BinTuner falls by more than 50%. Our findings paint a cautionary tale for security analysts: attackers have a new way to mutate malware code cost-effectively, and the research community needs to step back to reassess optimization-resistance evaluations.
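
The search loop behind this kind of auto-tuning can be sketched with a mock compiler standing in for GCC/LLVM and byte-level difference standing in for a real binary-diffing metric. All names, pass effects, and numbers below are invented; BinTuner drives real compilers and a richer search over optimization sequences.

```python
# Illustrative sketch of search-based iterative compilation for
# maximizing binary code difference. mock_compile is a deterministic
# stand-in: each "pass" perturbs some bytes of a toy binary.
import random

PASSES = ["inline", "unroll", "vectorize", "reorder", "cse"]

def mock_compile(flags):
    binary = bytearray(range(64))
    for i, p in enumerate(PASSES):
        if p in flags:                       # pass i flips its residue class
            for j in range(i, 64, len(PASSES)):
                binary[j] ^= 0x5A + i
    return bytes(binary)

def diff(a, b):
    """Byte-level difference, standing in for a binary-diffing score."""
    return sum(x != y for x, y in zip(a, b))

baseline = mock_compile(set())               # the "-O0" reference build
random.seed(0)
best, best_score = set(), 0
for _ in range(100):                         # random search over flag sets
    cand = {p for p in PASSES if random.random() < 0.5}
    score = diff(mock_compile(cand), baseline)
    if score > best_score:
        best, best_score = cand, score
print(best, best_score)
```

Swapping the random sampler for a genetic algorithm and the toy metric for a real diffing tool yields the iterative-compilation setup the paper describes.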

    DOI: 10.1145/3453483.3454035


    Artifact and Appendix of “RefinedC: Automating the Foundational Verification of C Code with Refined Ownership Types”

    Authors: Sammler, Michael and Lepigre, Rodolphe and Krebbers, Robbert and Memarian, Kayvan and Dreyer, Derek and Garg, Deepak
    Keywords: C programming language, Coq, Iris, ownership types, proof automation, refinement types, separation logic

    Abstract

    This is the artifact for the PLDI’21 paper “RefinedC: Automating the Foundational Verification of C Code with Refined Ownership Types”. It contains the RefinedC tool including its Coq development and the appendix for the paper.

    DOI: 10.1145/3453483.3454036


    Replication Package for Artifact: “Wire Sorts: A Language Abstraction for Safe Hardware Composition”

    Authors: Christensen, Michael and Sherwood, Timothy and Balkind, Jonathan and Hardekopf, Ben
    Keywords: combinational cycle detection, composition, hardware description languages, modules

    Abstract

    This artifact contains the code for reproducing the results in the paper “Wire Sorts: A Language Abstraction for Safe Hardware Composition.” Its purpose is to demonstrate how our tool can analyze and annotate hardware modules in order to determine their input and output wire sorts (and check these sorts against any user ascriptions), as well as use these sorts to improve intermodular connection checks.

    DOI: 10.1145/3453483.3454037


    DeepCuts: a deep learning optimization framework for versatile GPU workloads

    Authors: Jung, Wookeun and Dao, Thanh Tuan and Lee, Jaejin
    Keywords: GPU, Deep Learning, Code Generation

    Abstract

    Widely used Deep Learning (DL) frameworks, such as TensorFlow, PyTorch, and MXNet, heavily rely on NVIDIA cuDNN for performance. However, using cuDNN does not always give the best performance. One reason is that it is hard to handle every case of versatile DNN models and GPU architectures with a library that has a fixed implementation. Another reason is that cuDNN lacks kernel fusion functionality, which offers many opportunities to improve performance. In this paper, we propose a DL optimization framework for versatile GPU workloads, called DeepCuts. It considers both kernel implementation parameters and GPU architectures. It analyzes the DL workload, groups multiple DL operations into a single GPU kernel, and generates optimized GPU kernels considering kernel implementation parameters and GPU architecture parameters. The evaluation results with various DL workloads for inference and training indicate that DeepCuts outperforms cuDNN/cuBLAS-based implementations and state-of-the-art DL optimization frameworks, such as TVM, TensorFlow XLA, and TensorRT.
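
The payoff of grouping operations into one kernel can be seen even in a scalar sketch (illustrative only; DeepCuts generates and tunes actual GPU kernels): the fused version avoids materializing and re-reading the intermediate array.

```python
# Kernel fusion in miniature: two elementwise "kernels" run back to back
# write an intermediate array to memory, while the fused version makes
# a single pass with no intermediate. The function names are invented.

def relu(x):
    return x if x > 0.0 else 0.0

def unfused(xs, scale):
    tmp = [v * scale for v in xs]          # kernel 1: materializes tmp
    return [relu(v) for v in tmp]          # kernel 2: re-reads tmp

def fused(xs, scale):
    return [relu(v * scale) for v in xs]   # one kernel, no intermediate

data = [-2.0, -0.5, 0.0, 1.5, 3.0]
assert unfused(data, 2.0) == fused(data, 2.0) == [0.0, 0.0, 0.0, 3.0, 6.0]
```

On a GPU the intermediate costs global-memory bandwidth and a kernel launch, which is why fusion decisions interact with the implementation parameters the framework tunes.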

    DOI: 10.1145/3453483.3454038


    Replication Package for Article: Retrofitting Effect Handlers onto OCaml

    Authors: Sivaramakrishnan, KC and Dolan, Stephen and White, Leo and Kelly, Tom and Jaffer, Sadiq and Madhavapeddy, Anil
    Keywords: Concurrency, Effect handlers, Generators, OCaml, Web server

    Abstract

    The artifact contains all the materials needed to reproduce and extend the results of the work. It includes the Multicore OCaml compiler, package dependencies, benchmarks, and the scripts to run and produce the results from the paper.

    DOI: 10.1145/3453483.3454039


    Replication Package for Article: Unqomp: Synthesizing Uncomputation in Quantum Circuits

    Authors: Paradis, Anouk and Bichsel, Benjamin and Steffen, Samuel and Vechev, Martin
    Keywords: Quantum Circuits, Synthesis, Uncomputation

    Abstract

    This is a snapshot of Unqomp, providing the artifact for the PLDI’21 paper “Unqomp: Synthesizing Uncomputation in Quantum Circuits”. For the latest version of Unqomp, refer to https://github.com/eth-sri/Unqomp.

    It contains the implementation of Unqomp for Qiskit, as well as all the necessary material to reproduce the evaluation of our paper.
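
The essence of uncomputation can be sketched with classical reversible gates (a toy analogue; Unqomp operates on genuine quantum circuits in Qiskit): an ancilla is computed, used, and then restored to zero by replaying the computing gates in reverse order.

```python
# Toy illustration of uncomputation. CNOT/Toffoli here act on plain
# bits, and each gate is self-inverse, so replaying the "compute"
# gate list in reverse restores the ancilla to |0>. All names invented.

def toffoli(bits, c1, c2, t):
    if bits[c1] and bits[c2]:
        bits[t] ^= 1

def cnot(bits, c, t):
    if bits[c]:
        bits[t] ^= 1

def with_uncomputation(x, y):
    bits = {"x": x, "y": y, "a": 0, "out": 0}
    compute = [("x", "y", "a")]            # a := x AND y (one Toffoli)
    for c1, c2, t in compute:
        toffoli(bits, c1, c2, t)
    cnot(bits, "a", "out")                 # use the ancilla
    for c1, c2, t in reversed(compute):    # uncompute: restore a = 0
        toffoli(bits, c1, c2, t)
    assert bits["a"] == 0                  # ancilla is clean again
    return bits["out"]

assert [with_uncomputation(x, y)
        for x, y in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [0, 0, 0, 1]
```

In the quantum setting the ancilla must be returned to zero before it can be discarded or reused, which is the synthesis problem Unqomp automates.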

    DOI: 10.1145/3453483.3454040


    Zooid: a DSL for Certified Multiparty Computation

    Authors: Castro-Perez, David and Ferreira, Francisco and Gheri, Lorenzo and Yoshida, Nobuko
    Keywords: concurrent processes, Coq, deadlock freedom, liveness, mechanisation, multiparty session types, protocol compliance

    Abstract

    This is the implementation and Coq mechanisation of the metatheory of Multiparty Session Types (MPST), as described in the paper.

    DOI: 10.1145/3453483.3454041


    Fluid: a framework for approximate concurrency via controlled dependency relaxation

    Authors: Jiang, Huaipan and Zhang, Haibo and Tang, Xulong and Govindaraj, Vineetha and Sampson, Jack and Kandemir, Mahmut Taylan and Zhang, Danfeng
    Keywords: Eager Execution, Approximate Computing

    Abstract

    In this work, we introduce the Fluid framework, a set of language, compiler and runtime extensions that allow for the expression of regions within which dataflow dependencies can be approximated in a disciplined manner. Our framework allows the eager execution of dependent tasks before their inputs have finalized in order to capitalize on situations where an eagerly-consumed input has a high probability of sufficiently resembling the value or structure of the final value that would have been produced in a conservative/precise execution schedule. We introduce controlled access to the early consumption of intermediate values and provide hooks for user-specified quality assurance mechanisms that can automatically enforce re-execution of eagerly-executed tasks if their output values do not meet heuristic expectations. Our experimental analysis indicates that the fluidized versions of the applications bring 22.2% average execution time improvements, over their original counterparts, under the default values of our fluidization parameters. The Fluid approach is largely orthogonal to approaches that aim to reduce the task effort itself and we show that utilizing the Fluid framework can yield benefits for both originally precise and originally approximate versions of computation.
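
The eager-consumption-plus-quality-check pattern can be sketched as follows. The producer/consumer functions, the tolerance, and the `fluid_run` driver are invented for illustration and are not the Fluid API.

```python
# Sketch of approximate concurrency via relaxed dependencies: a
# dependent task consumes an intermediate value before it is final,
# and a quality-assurance hook forces re-execution if the eager
# result drifted too far from the precise one.

def producer(iterations):
    est = 0.0
    for k in range(iterations):            # partial sums converge to 2.0
        est += 1.0 / (2 ** k)
    return est

def consumer(v):
    return 10.0 * v

def fluid_run(eager_iters, full_iters, tol=0.05):
    eager = consumer(producer(eager_iters))        # consume early
    final_input = producer(full_iters)             # input finalizes later
    if abs(eager - consumer(final_input)) > tol:   # QA hook
        return consumer(final_input), "re-executed"
    return eager, "eager result accepted"

val, status = fluid_run(eager_iters=12, full_iters=30)
assert status == "eager result accepted"           # early input was close
val2, status2 = fluid_run(eager_iters=1, full_iters=30)
assert status2 == "re-executed"                    # too coarse: redo task
```

In the real framework the eager task runs concurrently with the producer; the sketch only shows the accept/re-execute decision.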

    DOI: 10.1145/3453483.3454042


    Developer and user-transparent compiler optimization for interactive applications

    Authors: Mpeis, Paschalis and Petoumenos, Pavlos and Hazelwood, Kim and Leather, Hugh
    Keywords: replay, iterative compilation, interactive, capture

    Abstract

    Traditional offline optimization frameworks rely on representative hardware, software, and inputs on which to compare different optimization decisions. With application-specific optimization for mobile systems, though, the idea of a representative test bench is unrealistic, while creating offline inputs is non-trivial. Online approaches partially overcome these problems, but they might expose users to suboptimal or even erroneously optimized code. As a result, our mobile code is poorly optimized, which results in wasted performance, wasted energy, and user frustration. In this paper, we introduce a novel compiler optimization approach designed for mobile applications. It requires no developer effort, it tunes applications for the user’s device and usage patterns, and it has no negative impact on the user experience. It is based on a lightweight capture and replay mechanism. In its online stage, it captures the state accessed by any targeted code region. By re-purposing existing OS capabilities, it keeps the overhead low. In its offline stage, it replays the code region but under different optimization decisions to enable sound comparisons of different optimizations under realistic conditions. Coupled with a search heuristic for the compiler optimization space, this allows us to discover optimization decisions that improve performance without testing these decisions directly on the user. We implemented a prototype system in Android based on LLVM combined with a genetic search engine. We evaluated it on both benchmarks and real Android applications. Online captures are infrequent, and each one introduces an overhead of less than 15ms on average. For this negligible effect on user experience, we achieve speedups of 44% on average over the Android compiler and 35% over LLVM -O3.

    DOI: 10.1145/3453483.3454043


    Artifact for Article: Demanded Abstract Interpretation

    Authors: Stein, Benno and Chang, Bor-Yuh Evan and Sridharan, Manu
    Keywords: Abstract interpretation, demand-driven analysis, incremental analysis

    Abstract

    The artifact is a docker image containing source code and binaries needed to reproduce the paper’s experiments.

    DOI: 10.1145/3453483.3454044


    Learning to find naming issues with big code and small supervision

    Authors: He, Jingxuan and Lee, Cheng-Chun and Raychev, Veselin and Vechev, Martin
    Keywords: Static analysis, Name-based program analysis, Machine learning, Bug detection, Anomaly detection

    Abstract

    We introduce a new approach for finding and fixing naming issues in source code. The method is based on a careful combination of unsupervised and supervised procedures: (i) unsupervised mining of patterns from Big Code that express common naming idioms; program fragments violating such idioms indicate likely naming issues; and (ii) supervised learning of a classifier on a small labeled dataset, which filters potential false positives from the violations. We implemented our method in a system called Namer and evaluated it on a large number of Python and Java programs. We demonstrate that Namer is effective in finding naming mistakes in real-world repositories with high precision (~70%). Perhaps surprisingly, we also show that existing deep learning methods are not practically effective and achieve low precision in finding naming issues (up to ~16%).
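
A toy version of the mining stage makes the idea concrete. The corpus, the support threshold, and the `naming_issue` filter are invented for illustration; Namer's mined patterns and learned classifier are far richer.

```python
# Stage (i) in miniature: mine (name, type) idioms from a tiny "corpus",
# then flag fragments that contradict a well-supported idiom. A learned
# classifier would further filter these candidates in stage (ii).
from collections import Counter

corpus = [("len", "int"), ("count", "int"), ("size", "int"),
          ("name", "str"), ("title", "str"), ("count", "int"),
          ("is_open", "bool"), ("is_done", "bool"), ("len", "int")]

idioms = Counter(corpus)                  # (name, type) idiom frequencies

def naming_issue(name, typ, min_support=2):
    """Flag (name, typ) if the name has a well-supported idiom
    elsewhere in the corpus and this usage contradicts it."""
    same_name = [nt for nt, c in idioms.items()
                 if nt[0] == name and c >= min_support]
    return bool(same_name) and (name, typ) not in same_name

assert naming_issue("count", "str")       # 'count' is idiomatically int
assert not naming_issue("count", "int")   # matches the idiom
assert not naming_issue("flag", "bool")   # unseen name: nothing to violate
```

The unsupervised stage over-reports by construction; that is exactly the false-positive stream the small supervised classifier is trained to prune.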

    DOI: 10.1145/3453483.3454045


    DIY assistant: a multi-modal end-user programmable virtual assistant

    Authors: Fischer, Michael H. and Campagna, Giovanni and Choi, Euirim and Lam, Monica S.
    Keywords: web automation, voice user interfaces, virtual assistants, programming by demonstration, end-user programming

    Abstract

    While Alexa can perform over 100,000 skills, its capability covers only a fraction of what is possible on the web. Individuals need and want to automate a long tail of web-based tasks which often involve visiting different websites and require programming concepts such as function composition, conditional, and iterative evaluation. This paper presents DIYA (Do-It-Yourself Assistant), a new system that empowers users to create personalized web-based virtual assistant skills that require the full generality of composable control constructs, without having to learn a formal programming language. With DIYA, the user demonstrates their task of interest in the browser and issues a few simple voice commands, such as naming the skills and adding conditions on the action. DIYA turns these multi-modal specifications into voice-invocable skills written in the ThingTalk 2.0 programming language we designed for this purpose. DIYA is a prototype that works in the Chrome browser. Our user studies show that 81% of the proposed routines can be expressed using DIYA. DIYA is easy to learn, and 80% of users surveyed find DIYA useful.

    DOI: 10.1145/3453483.3454046


    Web question answering with neurosymbolic program synthesis

    Authors: Chen, Qiaochu and Lamoreaux, Aaron and Wang, Xinyu and Durrett, Greg and Bastani, Osbert and Dillig, Isil
    Keywords: Web Information Extraction, Programming by Example, Program Synthesis

    Abstract

    In this paper, we propose a new technique based on program synthesis for extracting information from webpages. Given a natural language query and a few labeled webpages, our method synthesizes a program that can be used to extract similar types of information from other unlabeled webpages. To handle websites with diverse structure, our approach employs a neurosymbolic DSL that incorporates both neural NLP models as well as standard language constructs for tree navigation and string manipulation. We also propose an optimal synthesis algorithm that generates all DSL programs that achieve optimal F1 score on the training examples. Our synthesis technique is compositional, prunes the search space by exploiting a monotonicity property of the DSL, and uses transductive learning to select programs with good generalization power. We have implemented these ideas in a new tool called WebQA and evaluate it on 25 different tasks across multiple domains. Our experiments show that WebQA significantly outperforms existing tools such as state-of-the-art question answering models and wrapper induction systems.

    DOI: 10.1145/3453483.3454047


    Replication Package for Article: RbSyn: Type- and Effect-Guided Program Synthesis

    Authors: Guria, Sankha Narayan and Foster, Jeffrey S. and Van Horn, David
    Keywords: program synthesis, Ruby, type and effect systems

    Abstract

    The artifact is a Docker image that contains all of the source code, benchmarks, and experiment harnesses used in the development of the paper (set-up and ready to run). The README contains instructions to reproduce results from the paper, as well as pointers for how to extend or modify the tool and benchmarks.

    DOI: 10.1145/3453483.3454048


    High Performance Correctly Rounded Math Libraries for 32-bit Floating Point Representations

    Authors: Lim, Jay P. and Nagarakatte, Santosh
    Keywords: correctly rounded results, elementary functions, floating point, posits

    Abstract

    RLIBM-32 is both a math library that provides correctly rounded results for all inputs and a collection of tools used to generate the correct polynomials. The techniques behind the tools appear at PLDI 2021. Currently, RLIBM-32 supports a number of elementary functions for the float and posit32 representations.

    List of float functions supported by RLIBM-32

    1. log(x), log2(x), log10(x)

    2. exp(x), exp2(x), exp10(x)

    3. sinh(x), cosh(x)

    4. sinpi(x), cospi(x)

    List of posit32 functions supported by RLIBM-32

    1. log(x), log2(x), log10(x)

    2. exp(x), exp2(x), exp10(x)

    3. sinh(x), cosh(x)
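
What "correctly rounded" means can be illustrated by using double precision as an oracle. Note this is not RLIBM-32's method: the library generates polynomials that directly target the correctly rounded float result, and a double-precision oracle can itself fall victim to double rounding on hard cases.

```python
# A correctly rounded float32 log(x) is the real value of log(x)
# rounded ONCE to float32. Here we approximate that oracle by rounding
# the double-precision result to float32 via struct packing.
import math
import struct

def to_float32(x):
    """Round a Python float (binary64) to the nearest binary32 value."""
    return struct.unpack("f", struct.pack("f", x))[0]

x = 2.0
oracle = to_float32(math.log(x))          # round the double result once

assert to_float32(oracle) == oracle       # representable in float32
assert abs(oracle - math.log(2.0)) < 1e-7 # within one float32 ulp of ln 2
```

A library whose float32 result ever differs from this single-rounded value for some input is, by definition, not correctly rounded for that input.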

    DOI: 10.1145/3453483.3454049


    Porcupine: a synthesizing compiler for vectorized homomorphic encryption

    Authors: Cowan, Meghan and Dangwal, Deeksha and Alaghi, Armin and Trippel, Caroline and Lee, Vincent T. and Reagen, Brandon
    Keywords: vectorization, program synthesis, homomorphic encryption

    Abstract

    Homomorphic encryption (HE) is a privacy-preserving technique that enables computation directly on encrypted data. Despite its promise, HE has seen limited use due to performance overheads and compilation challenges. Recent work has made significant advances to address the performance overheads but automatic compilation of efficient HE kernels remains relatively unexplored. This paper presents Porcupine, an optimizing compiler that generates vectorized HE code using program synthesis. HE poses three major compilation challenges: it only supports a limited set of SIMD-like operators, it uses long-vector operands, and decryption can fail if ciphertext noise growth is not managed properly. Porcupine captures the underlying HE operator behavior so that it can automatically reason about the complex trade-offs imposed by these challenges to generate optimized, verified HE kernels. To improve synthesis time, we propose a series of optimizations including a sketch design tailored to HE to narrow the program search space. We evaluate Porcupine using a set of kernels and show speedups of up to 52% (25% geometric mean) compared to heuristic-driven hand-optimized kernels. Analysis of Porcupine’s synthesized code reveals that optimal solutions are not always intuitive, underscoring the utility of automated reasoning in this domain.
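
The restricted programming model the compiler targets can be sketched on plain lists standing in for ciphertext slots: only slot-wise arithmetic and cyclic rotation are available, so even a simple reduction becomes a log-step rotate-and-add kernel. This is illustrative only; Porcupine synthesizes and verifies such kernels automatically over real HE operators.

```python
# HE-style SIMD kernel in miniature: a "ciphertext" is a fixed-width
# vector of slots, and the only tools are slot-wise add and rotation.
# Summing all slots therefore uses log2(n) rotate-and-add steps.

def rotate(v, k):                  # cyclic rotation of the slot vector
    k %= len(v)
    return v[k:] + v[:k]

def slot_add(a, b):                # slot-wise (SIMD) addition
    return [x + y for x, y in zip(a, b)]

def total_sum(ct):
    """After log2(len(ct)) steps every slot holds the total sum
    (len(ct) must be a power of two)."""
    step = 1
    while step < len(ct):
        ct = slot_add(ct, rotate(ct, step))
        step *= 2
    return ct

ct = total_sum([1, 2, 3, 4, 5, 6, 7, 8])
assert all(s == 36 for s in ct)    # every slot ends up with 1+2+...+8
```

Finding such rotation patterns by hand gets hard quickly for 2D kernels, which is the search problem the synthesizer's sketches constrain.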

    DOI: 10.1145/3453483.3454050


    Replication Package for: Concolic Program Repair

    Authors: Shariffdeen, Ridwan and Noller, Yannic and Grunske, Lars and Roychoudhury, Abhik
    Keywords: patch overfitting, program repair, program synthesis, symbolic execution

    Abstract

    This is the artifact for the PLDI’21 submission “Concolic Program Repair”. It includes the following content:

      • the tool CPR, which implements our concolic program repair concept,
      • all benchmark subjects and scripts to reproduce our evaluation, and
      • additional documentation to allow the re-use of CPR, as well as helpful examples.

    DOI: 10.1145/3453483.3454051


    Artifact: Concise, Type-Safe, and Efficient Structural Diffing

    Authors: Erdweg, Sebastian and Szabó, Tamás
    Keywords: incremental computing, tree diffing

    Abstract

    Implementation of the algorithm in Scala; benchmark code and data.

    DOI: 10.1145/3453483.3454052


    CoStar parser implementation, correctness proofs, and performance evaluation

    Authors: Lasser, Sam and Casinghino, Chris and Fisher, Kathleen and Roux, Cody
    Keywords: interactive theorem proving, parsing

    Abstract

    Artifact submitted for evaluation along with the PLDI 2021 paper "CoStar: A Verified ALL(*) Parser."

    DOI: 10.1145/3453483.3454053


    COMFORT: v1.0

    Authors: Ye, Guixin and Tang, Zhanyong and Tan, Shin Hwei and Huang, Songfang and Fang, Dingyi and Sun, Xiaoyang and Bian, Lizhong and Wang, Haibo and Wang, Zheng
    Keywords: Compiler Fuzzing, Conformance bugs, Deep Learning, JavaScript

    Abstract

    COMFORT is a deep-learning-based compiler fuzzer for testing JavaScript engines for bugs, including conformance bugs (JS engine implementations that violate the specification defined in the ECMA-262 ECMAScript standard). The corresponding research paper, “Automated Conformance Testing for JavaScript Engines via Deep Compiler Fuzzing,” appeared at PLDI 2021.

    DOI: 10.1145/3453483.3454054


    Artifact Evaluation for “Beyond the Elementary Representations of Program Invariants over Algebraic Data Types” paper

    Authors: Kostyukov, Yurii and Mordvinov, Dmitry and Fedyukovich, Grigory
    Keywords: algebraic data types, finite models, first-order definability, invariant representation, invariants, tree automata

    Abstract

    The provided Ubuntu20.04.ova file is a VM snapshot with the artifact installed.

    The artifact is provided to support the results of the Evaluation section of the paper “Beyond the Elementary Representations of Program Invariants over Algebraic Data Types”, submitted to PLDI 2021.

    Instructions for reproducing the paper’s results are contained in the README.txt file.

    DOI: 10.1145/3453483.3454055


    Artifact for PLDI’21 paper #156 “Fast and Precise Certification of Transformers”

    Authors: Bonaert, Gregory and Dimitrov, Dimitar I. and Baader, Maximilian and Vechev, Martin
    Keywords: Abstract Interpretation, Adversarial attacks, Deep Learning, Robustness Certification, Transformer Networks

    Abstract

    We present DeepT, a novel method for certifying Transformer networks based on abstract interpretation. The key idea behind DeepT is our new multi-norm Zonotope abstract domain, an extension of the classical Zonotope designed to handle L1 and L2-norm bound perturbations. This artifact contains the source code, Transformer networks, scripts and Jupyter notebooks for the DeepT verifier and can be used to reproduce the core results of the paper “Fast and Precise Certification of Transformers”, published at the PLDI’21 conference.
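
The classical zonotope domain that the multi-norm Zonotope extends can be sketched in a few lines (scalar, L∞-only, with made-up numbers; the paper's domain additionally tracks L1/L2-bounded noise symbols and handles full Transformer layers):

```python
# A zonotope element is a center plus generator coefficients, one per
# noise symbol e_i in [-1, 1]. Affine maps transform it exactly, and
# concrete interval bounds come from summing |generators|.

def affine(center, gens, w, b):
    """Apply y = w*x + b (w, b scalars here for brevity)."""
    return w * center + b, [w * g for g in gens]

def bounds(center, gens):
    radius = sum(abs(g) for g in gens)
    return center - radius, center + radius

c, gs = 1.0, [0.5, -0.25]            # x = 1 + 0.5*e1 - 0.25*e2
assert bounds(c, gs) == (0.25, 1.75)

c, gs = affine(c, gs, w=2.0, b=1.0)  # y = 2x + 1, handled without loss
lo, hi = bounds(c, gs)
assert (lo, hi) == (1.5, 4.5)        # exactly 2*[0.25, 1.75] + 1
```

Nonlinear operations (softmax, dot products between abstract values) are where precision is lost and where the paper's abstractions do the real work; affine layers are the easy, exact case shown here.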

    DOI: 10.1145/3453483.3454056


    Static Control-Flow Analyzers for the Article: Trace-Based Control-Flow Analysis

    Authors: Montagu, Benoît and Jensen, Thomas
    Keywords: CFA, control-flow analysis, lambda-calculus, static analysis, widening

    Abstract

    The artifact contains prototype implementations of the static analyzers for control-flow analysis (CFA) described in the PLDI’21 article “Trace-Based Control-Flow Analysis”, a collection of program examples, and a procedure to reproduce the experimental results presented in the article. The artifact is a TAR.XZ archive that contains a README file, a LICENSE file, and two components. The first component is the OCaml source code of the analyzers. We also provide a version of the analyzers that runs directly in a web browser. The second component is a Docker container that embeds the same sources and a minimal Linux environment, so that you can compile the sources and execute the analyzers. The README file describes in detail the procedure to build and run the artifact using the Docker container.

    DOI: 10.1145/3453483.3454057


    Replication package for the article: Compiling Stan to Generative Probabilistic Languages and Extension to Deep Probabilistic Programming

    Authors: Baudart, Guillaume and Burroni, Javier and Hirzel, Martin and Mandel, Louis and Shinnar, Avraham
    Keywords: Compilation, Deep probabilistic programming, NumPyro, Probabilistic programming, Pyro, Stan, Variational inference

    Abstract

    This artifact contains the Stan (+ extensions) to Pyro and NumPyro compiler presented in the paper, and the code to reproduce the evaluation.

    The artifact comprises three main parts, corresponding to the three top-level directories:

    - stanc3: the fork of the Stanc3 compiler with two new backends targeting Pyro and NumPyro (clone of https://github.com/deepppl/stanc3)
    - stan-num-pyro: the Pyro and NumPyro runtime libraries to execute the program (clone of https://github.com/deepppl/stan-num-pyro)
    - evaluation: code and data to reproduce the evaluation section of the paper (clone of https://github.com/deepppl/evaluation)

    In addition, the artifact contains:

    - README.md: this file
    - deepstan.tar.gz: a Docker image with the compiler and runtime installed
    - pldi2021.pdf: the paper
    - coin.stan: a simple Stan program
    - coin_infer.py: a Python script to compile and execute coin.stan
    - requirements.txt: the list of Python dependencies
    - deepstan.docker: the Docker file used to build the image

    To summarize, this artifact:

    - Demonstrates that it is possible to compile Stan programs to generative probabilistic programming languages by providing a modified version of the Stanc3 compiler with two new backends targeting Pyro and NumPyro.
    - Demonstrates the extension of Stan with deep probabilistic programming and variational inference with explicit guides.
    - Provides the code used in the evaluation section of the paper to answer the following research questions:
      - RQ1: Can we compile and run all Stan models?
      - RQ2: What is the impact of the compilation on accuracy?
      - RQ3: What is the impact of the compilation on speed?
      - RQ4: Are explicit variational guides useful?
      - RQ5: For deep probabilistic models, how does DeepStan compare to hand-written Pyro code?

    DOI: 10.1145/3453483.3454058


    Evaluated Artifact for: Filling Typed Holes with Live GUIs

    Authors: Omar, Cyrus and Moon, David and Blinn, Andrew and Voysey, Ian and Collins, Nick and Chugh, Ravi
    Keywords: GUIs, live programming, macros, typed holes

    Abstract

    Agda proofs and snapshot of Hazel implementation described in the paper.

    DOI: 10.1145/3453483.3454059


    Artifact for “Concurrent Deferred Reference Counting with Constant-Time Overhead”

    Authors: Anderson, Daniel and Blelloch, Guy E. and Wei, Yuanhao
    Keywords: automatic memory management, concurrent algorithms, memory reclamation, reference counting

    Abstract

    This artifact contains a preliminary version of our C++ library for atomic reference-counted pointers and a benchmark suite that evaluates its performance against existing reference-counted pointers and manual SMR techniques.

    DOI: 10.1145/3453483.3454060


    Software artifact for the paper “Quantum Abstract Interpretation”

    Authors: Yu, Nengkun and Palsberg, Jens
    Keywords: abstract interpretation, Quantum programming, scalability

    Abstract

    The artifact allows a user to reproduce the experimental results in the paper “Quantum Abstract Interpretation”.

    DOI: 10.1145/3453483.3454061


    Replication Package for Article: Central Moment Analysis for Cost Accumulators in Probabilistic Programs

    Authors: Wang, Di and Hoffmann, Jan and Reps, Thomas
    Keywords: central moments, cost analysis, Probabilistic programs, tail bounds

    Abstract

    This artifact provides an implementation of a static analyzer for higher (raw or central) moments of cost accumulators (e.g., running time) in probabilistic programs.

    DOI: 10.1145/3453483.3454062


    Synthesizing data structure refinements from integrity constraints

    Authors: Pailoor, Shankara and Wang, Yuepeng and Wang, Xinyu and Dillig, Isil
    Keywords: Programming Languages, Program Synthesis, Data structure refinement

    Abstract

    Implementations of many data structures use several correlated fields to improve their performance; however, inconsistencies between these fields can be a source of serious program errors. To address this problem, we propose a new technique for automatically refining data structures from integrity constraints. In particular, consider a data structure D with fields F and methods M, as well as a new set of auxiliary fields F′ that should be added to D. Given this input and an integrity constraint Φ relating F and F′, our method automatically generates a refinement of D that satisfies the provided integrity constraint. Our method is based on a modular instantiation of the CEGIS paradigm and uses a novel inductive synthesizer that augments top-down search with three key ideas. First, it computes necessary preconditions of partial programs to dramatically prune its search space. Second, it augments the grammar with promising new productions by leveraging the computed preconditions. Third, it guides top-down search using a probabilistic context-free grammar obtained by statically analyzing the integrity checking function and the original code base. We evaluated our method on 25 data structures from popular Java projects and show that our method can successfully refine 23 of them. We also compare our method against two state-of-the-art synthesis tools and perform an ablation study to justify our design choices. Our evaluation shows that (1) our method is successful at refining many data structure implementations in the wild, (2) it advances the state-of-the-art in synthesis, and (3) our proposed ideas are crucial for making this technique practical.
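    A hypothetical instance of the problem setting may help fix the notation: a data structure D (here, a bag of numbers) gains an auxiliary field F′ (a cached total), and the integrity constraint Φ relates the two. The refinement the method synthesizes amounts to the field updates inside each method; everything below is an illustrative sketch, not output of the tool.

```python
# Hypothetical example of a refined data structure: `items` is the original
# field F, `total` is the auxiliary field F', and the integrity constraint
# Phi is  total == sum(items). The "refinement" is the pair of updates to
# `total` inserted into the methods so Phi is preserved.
class Bag:
    def __init__(self):
        self.items = []     # original field F
        self.total = 0      # auxiliary field F'

    def add(self, x):
        self.items.append(x)
        self.total += x     # synthesized-style update keeping Phi

    def remove(self, x):
        self.items.remove(x)
        self.total -= x     # synthesized-style update keeping Phi

    def check(self):        # the integrity-checking function Phi
        return self.total == sum(self.items)

b = Bag()
b.add(3); b.add(4); b.remove(3)
```

    Real-world cases (e.g., a size counter or an index map kept alongside a collection) follow the same shape, but with updates that are far less obvious to write by hand, which is what motivates synthesizing them.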

    DOI: 10.1145/3453483.3454063


    Replication Package for Article: Provable Repair of Deep Neural Networks

    Authors: Sotoudeh, Matthew and Thakur, Aditya V.
    Keywords: deep learning, machine learning, program repair

    Abstract

    The zip file contains our artifact as a virtual machine as well as a README with instructions. An updated version of this artifact is stored at https://github.com/95616ARG/PRDNN

    DOI: 10.1145/3453483.3454064


    Replication Package for Article: Integration Verification across Software and Hardware for a Simple Embedded System

    Authors: Erbsen, Andres and Gruetter, Samuel and Choi, Joonwon and Wood, Clark and Chlipala, Adam
    Keywords: Embedded Systems, Formal Verification, Hardware-Software Interface, Proof Assistants, RISC-V Instruction-Set Family

    Abstract

    This artifact includes the Coq development described in the paper and all dependencies for generating the FPGA bitstream. All this is packaged as a .vdi disk image that boots into a Linux terminal environment accessible over SSH. The artifact was evaluated by running it in VirtualBox as described in README.txt.

    DOI: 10.1145/3453483.3454065


    dZ3: Artifact for “Symbolic Boolean Derivatives for Efficiently Solving Extended Regular Expression Constraints”

    Authors: Stanford, Caleb and Veanes, Margus and Bjørner, Nikolaj
    Keywords: regex, regular expression, SMT, string, Z3

    Abstract

    This is the artifact for the paper: Symbolic Boolean Derivatives for Efficiently Solving Extended Regular Expression Constraints. This artifact is provided as a Docker container for the artifact evaluation for PLDI 2021.

    If convenient, you can also view the artifact files online on GitHub at https://github.com/cdstanford/dz3-artifact. The GitHub repository contains everything in the Docker container except the solver binaries, which are too large.

    DOI: 10.1145/3453483.3454066


    Abstraction for conflict-free replicated data types

    Authors: Liang, Hongjin and Feng, Xinyu
    Keywords: Replicated Data Types, Program Logic, Modular Verification, Eventual Consistency, Contextual Refinement

    Abstract

    Strong eventual consistency (SEC) has been used as a classic notion of correctness for Conflict-Free Replicated Data Types (CRDTs). However, it does not give proper abstractions of functionality, thus is not helpful for modular verification of client programs using CRDTs. We propose a new correctness formulation for CRDTs, called Abstract Converging Consistency (ACC), to specify both data consistency and functional correctness. ACC gives abstract atomic specifications (as an abstraction) to CRDT operations, and establishes consistency between the concrete execution traces and the execution using the abstract atomic operations. The abstraction allows us to verify the CRDT implementation and its client programs separately, resulting in more modular and elegant proofs than monolithic approaches for whole program verification. We give a generic proof method to verify ACC of CRDT implementations, and a rely-guarantee style program logic to verify client programs. Our Abstraction theorem shows that ACC is equivalent to contextual refinement, linking the verification of CRDT implementations and clients together to derive functional correctness of whole programs.
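    As a concrete backdrop for the abstract-specification idea, consider a standard grow-only counter (G-Counter) CRDT: the concrete state is per-replica, merge is commutative, associative, and idempotent (so replicas converge), and the natural abstract atomic specification of `inc` is simply an atomic counter increment. A minimal sketch, not taken from the paper's formal development:

```python
# Standard G-Counter CRDT: each replica keeps a map replica-id -> local
# count; merge takes pointwise maxima. Because merge is commutative,
# associative, and idempotent, replicas that exchange states converge.
class GCounter:
    def __init__(self, rid):
        self.rid = rid
        self.counts = {}

    def inc(self):                     # abstract spec: atomic increment
        self.counts[self.rid] = self.counts.get(self.rid, 0) + 1

    def value(self):
        return sum(self.counts.values())

    def merge(self, other):            # pointwise max of the two states
        for r, c in other.counts.items():
            self.counts[r] = max(self.counts.get(r, 0), c)

a, b = GCounter("a"), GCounter("b")
a.inc(); a.inc(); b.inc()
a.merge(b); b.merge(a)   # after exchanging states, both replicas agree
```

    Verifying a client against the atomic-increment abstraction rather than against this concrete map-and-merge machinery is exactly the kind of modularity the ACC formulation is meant to license.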

    DOI: 10.1145/3453483.3454067


    Artifact Evaluation for “Boosting SMT Solver Performance on Mixed-Bitwise-Arithmetic Expressions”

    Authors: Xu, Dongpeng and Liu, Binbin and Feng, Weijie and Ming, Jiang and Zheng, Qilong and Li, Jing and Yu, Qiaoyan
    Keywords: Mixed Boolean Arithmetic, Simplification, SMT Solvers

    Abstract

    The artifact is for evaluating the result from the paper “Boosting SMT Solver Performance on Mixed-Bitwise-Arithmetic Expressions.” Please read the README file before you run the program.

    DOI: 10.1145/3453483.3454068


    Distance-in-time versus distance-in-space

    Authors: Kandemir, Mahmut Taylan and Tang, Xulong and Zhao, Hui and Ryoo, Jihyun and Karakoy, Mustafa
    Keywords: Manycore architecture, Data locality, Code transformation

    Abstract

    Cache behavior is one of the major factors that influence the performance of applications. Most of the existing compiler techniques that target cache memories focus exclusively on reducing data reuse distances in time (DIT). However, current manycore systems employ distributed on-chip caches that are connected using an on-chip network. As a result, a reused data element/block needs to travel over this on-chip network, and the distance to be traveled – reuse distance in space (DIS) – can be as influential in dictating application performance as reuse DIT. This paper represents the first attempt at defining a compiler framework that accommodates both DIT and DIS. Specifically, it first classifies data reuses into four groups: G1: (low DIT, low DIS), G2: (high DIT, low DIS), G3: (low DIT, high DIS), and G4: (high DIT, high DIS). Then, observing that reuses in G1 represent the ideal case and there is nothing much to be done in computations in G4, it proposes a “reuse transfer” strategy that transfers select reuses between G2 and G3, eventually, transforming each reuse to either G1 or G4. Finally, it evaluates the proposed strategy using a set of 10 multithreaded applications. The collected results reveal that the proposed strategy reduces parallel execution times of the tested applications between 19.3% and 33.3%.
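    Distance-in-time corresponds to the classic reuse (stack) distance: the number of distinct blocks touched between two consecutive accesses to the same block. A minimal sketch of that metric, under the simplifying assumption of a flat access trace; distance-in-space, which additionally depends on where on the chip the cached copy lives, is not modeled here:

```python
# Reuse distance in time (DIT) over a memory-access trace: for each access,
# count the distinct blocks touched since the previous access to the same
# block. First accesses have no reuse (reported as None).
def reuse_distances(trace):
    last_seen = {}
    out = []
    for i, block in enumerate(trace):
        if block in last_seen:
            between = set(trace[last_seen[block] + 1 : i])
            out.append(len(between))
        else:
            out.append(None)   # first access: infinite reuse distance
        last_seen[block] = i
    return out

# 'a' is reused after touching {b, c}: its DIT is 2.
dists = reuse_distances(["a", "b", "c", "a", "b"])
```

    In the paper's terms, a reuse with small DIT can still be expensive if the reused block sits many network hops away (high DIS), which is why the classification uses both axes.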

    DOI: 10.1145/3453483.3454069


    Artifact for Paper: An Efficient Interpreter for Datalog by De-specializing Relations

    Authors: Hu, Xiaowen and Zhao, David and Jordan, Herbert and Scholz, Bernhard
    Keywords: Datalog, interpreter, static analysis

    Abstract

    The artifact provides source code, experiment inputs, and the necessary instructions for reproducing the primary results in the paper “An Efficient Interpreter for Datalog by De-specializing Relations”.

    DOI: 10.1145/3453483.3454070


    Replication Package for Article: Adaptive Restarts for Stochastic Synthesis

    Authors: Koenig, Jason R. and Padon, Oded and Aiken, Alex
    Keywords: binary code, program synthesis, restart algorithms, superoptimization

    Abstract

    This artifact contains the synthesis evaluation program and Superoptimization Benchmark of synthesis problems, experimental data, code for scraping new synthesis problems, input binaries for scraping, chart generating scripts, and other support scripts.

    DOI: 10.1145/3453483.3454071


    Source Code and Case Studies for Scooter & Sidecar: A Domain-Specific Approach to Writing Secure Database Migrations

    Authors: Renner, John and Sanchez-Stern, Alex and Brown, Fraser and Lerner, Sorin and Stefan, Deian
    Keywords: database migration, scooter, sidecar, SMT, verification

    Abstract

    The artifact contains the full source code for scooter and sidecar as they were originally submitted to artifact evaluation. Furthermore, the artifact contains several case studies which are discussed in the paper.

    DOI: 10.1145/3453483.3454072


    Artifact: When Threads Meet Events: Efficient and Precise Static Race Detection with Origins

    Authors: Liu, Bozhen and Liu, Peiming and Li, Yanze and Tsai, Chia-Che and Da Silva, Dilma and Huang, Jeff
    Keywords: Data Race Detection, Origins, Pointer Analysis, Static Analysis

    Abstract

    This is the artifact of O2 from the paper “When Threads Meet Events: Efficient and Precise Static Race Detection with Origins” published in PLDI’21 (https://doi.org/10.1145/3453483.3454073). O2 detects data races in large, complex multithreaded and event-driven software. O2 is powered by “origins”, an abstraction that unifies threads and events by treating them as entry points of code paths attributed with data pointers. We have implemented O2 for both C/C++ and JVM applications and applied it to a wide range of open source software (e.g., DaCapo Benchmarks, HDFS, Yarn, Zookeeper, Firefox Focus, Memcached, Linux kernel).

    DOI: 10.1145/3453483.3454073


    Replication Package for Viaduct: An Extensible, Optimizing Compiler for Secure Distributed Programs

    Authors: Acay, Coşku
    Keywords: information flow, multiparty computation, zero knowledge

    Abstract

    This artifact contains code and instructions necessary to replicate experimental results in the article “Viaduct: An Extensible, Optimizing Compiler for Secure Distributed Programs.” We provide a Docker image with the Viaduct compiler and all its dependencies installed, as well as the code samples used in the evaluation. The image also includes scripts for running the experiments from the paper.

    DOI: 10.1145/3453483.3454074


    Replication package for Reticle: A Virtual Machine for Programming Modern FPGAs

    Authors: Vega, Luis and McMahan, Joseph and Sampson, Adrian and Grossman, Dan and Ceze, Luis
    Keywords: compilers, FPGAs

    Abstract

    Virtual machine image containing all necessary dependencies for running the evaluation in the paper

    DOI: 10.1145/3453483.3454075


    Polynomial reachability witnesses via Stellensätze

    Authors: Asadi, Ali and Chatterjee, Krishnendu and Fu, Hongfei and Goharshady, Amir Kafshdar and Mahdavi, Mohammad
    Keywords: Stellensätze

    Abstract

    We consider the fundamental problem of reachability analysis over imperative programs with real variables. Previous works that tackle reachability are either unable to handle programs consisting of general loops (e.g. symbolic execution), or lack completeness guarantees (e.g. abstract interpretation), or are not automated (e.g. incorrectness logic). In contrast, we propose a novel approach for reachability analysis that can handle general and complex loops, is complete, and can be entirely automated for a wide family of programs. Through the notion of Inductive Reachability Witnesses (IRWs), our approach extends ideas from both invariant generation and termination to reachability analysis. We first show that our IRW-based approach is sound and complete for reachability analysis of imperative programs. Then, we focus on linear and polynomial programs and develop automated methods for synthesizing linear and polynomial IRWs. In the linear case, we follow the well-known approaches using Farkas’ Lemma. Our main contribution is in the polynomial case, where we present a push-button semi-complete algorithm. We achieve this using a novel combination of classical theorems in real algebraic geometry, such as Putinar’s Positivstellensatz and Hilbert’s Strong Nullstellensatz. Finally, our experimental results show we can prove complex reachability objectives over various benchmarks that were beyond the reach of previous methods.

    DOI: 10.1145/3453483.3454076


    Replication Package for Article: Sound Probabilistic Inference via Guide Types

    Authors: Wang, Di and Hoffmann, Jan and Reps, Thomas
    Keywords: Bayesian inference, coroutines, Probabilistic programming, type systems

    Abstract

    This artifact provides an implementation of a probabilistic programming language that (i) features a coroutine-based paradigm for implementing generative models and custom inference guides (e.g., proposals for importance sampling), and (ii) uses a novel guide-type system to ensure that the distributions specified by a model and its guide have the same support; as a consequence, the model-guide pair is provably sound for probabilistic inference.

    DOI: 10.1145/3453483.3454077


    System Implementation and Experiments for SPPL: Probabilistic Programming with Fast Exact Symbolic Inference

    Authors: Saad, Feras A. and Rinard, Martin C. and Mansinghka, Vikash K.
    Keywords: probabilistic programming, symbolic execution

    Abstract

    This artifact contains an implementation of the SPPL programming language (version 2.0.0), as well as code for experimental results and tutorial figures described in the paper.

    DOI: 10.1145/3453483.3454078


    Reverse engineering for reduction parallelization via semiring polynomials

    Authors: Morihata, Akimasa and Sato, Shigeyuki
    Keywords: semiring, reverse engineering, reduction loop, program synthesis, parallelization

    Abstract

    Parallel reduction, which summarizes a given dataset, e.g., the total, average, and maximum, plays a crucial role in parallel programming. This paper presents a new approach, reverse engineering, to automatically discovering nontrivial parallel reductions in sequential programs. The body of the sequential reduction loop is regarded as a black box, and its input-output behaviors are sampled. If the behaviors correspond to a set of linear polynomials over a semiring, a divide-and-conquer parallel reduction is generated. Auxiliary reverse-engineering methods enable a long and nested loop body to be decomposed, which makes our parallelization scheme applicable to various types of reduction loops. This approach is not only simple and efficient but also agnostic to the details of the input program. Its potential is demonstrated through several use case scenarios. A proof-of-concept implementation successfully inferred linear polynomials for nearly all of the 74 benchmarks exhaustively collected from the literature. These characteristics and experimental results demonstrate the promise of the proposed approach, despite its inherent unsoundness.
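    A classic hand-derived instance of this idea is maximum prefix sum: the loop body `s += x; m = max(m, s)` is linear over the (max, +) semiring, so each segment can be summarized by a pair (segment sum, best prefix) and segments combine associatively, yielding a divide-and-conquer parallel reduction. The sketch below works that example by hand; it is illustrative and not the paper's automatic inference:

```python
# Maximum prefix sum: the sequential loop is linear over the (max, +)
# semiring, so a segment is summarized as (sum of segment, best prefix),
# and two adjacent summaries combine associatively -- the basis for a
# divide-and-conquer parallel reduction.
def sequential_mps(xs):
    s = m = 0
    for x in xs:
        s += x
        m = max(m, s)
    return m

def summarize(xs):                     # leaf: summarize one segment
    return (sum(xs), sequential_mps(xs))

def combine(a, b):                     # associative combiner of summaries
    (s1, m1), (s2, m2) = a, b
    return (s1 + s2, max(m1, s1 + m2))

xs = [3, -5, 4, -1, 2, -8, 6]
mid = len(xs) // 2
_, mps = combine(summarize(xs[:mid]), summarize(xs[mid:]))
```

    The reverse-engineering approach samples the loop body's input-output behavior to discover summaries like `summarize`/`combine` automatically, instead of requiring the derivation above.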

    DOI: 10.1145/3453483.3454079


    DreamCoder software and data

    Authors: Ellis, Kevin and Wong, Catherine and Nye, Maxwell and Sablé-Meyer, Mathias
    Keywords: artificial intelligence, deep learning, program synthesis

    Abstract

    Source code for DreamCoder, pretrained checkpoints, and documentation

    DOI: 10.1145/3453483.3454080


    Automatically enforcing fresh and consistent inputs in intermittent systems

    Authors: Surbatovich, Milijana and Jia, Limin and Lucia, Brandon
    Keywords: timeliness, intermittent computing, energy harvesting

    Abstract

    Intermittently powered energy-harvesting devices enable new applications in inaccessible environments. Program executions must be robust to unpredictable power failures, introducing new challenges in programmability and correctness. One hard problem is that input operations have implicit constraints, embedded in the behavior of continuously powered executions, on when input values can be collected and used. This paper aims to develop a formal framework for enforcing these constraints. We identify two key properties—freshness (i.e., uses of inputs must satisfy the same time constraints as in continuous executions) and temporal consistency (i.e., the collection of a set of inputs must satisfy the same time constraints as in continuous executions). We formalize these properties and show that they can be enforced using atomic regions. We develop Ocelot, an LLVM-based analysis and transformation tool targeting Rust, to enforce these properties automatically. Ocelot provides the programmer with annotations to express these constraints and infers atomic region placement in a program to satisfy them. We then formalize Ocelot’s design and show that Ocelot generates correct programs with little performance cost or code changes.

    DOI: 10.1145/3453483.3454081


    Artifact for the paper “Modular Data-Race-Freedom Guarantees in the Promising Semantics”

    Authors: Cho, Minki and Lee, Sung-Hwan and Hur, Chung-Kil and Lahav, Ori
    Keywords: Compiler Optimizations, Data Race Freedom, Operational Semantics, Relaxed Memory Concurrency

    Abstract

    The artifact for the paper “Modular Data-Race-Freedom Guarantees in the Promising Semantics” (PLDI 2021). It contains mechanized proofs in Coq and script code for performance evaluation.

    DOI: 10.1145/3453483.3454082


    DNNFusion: accelerating deep neural networks execution with advanced operator fusion

    Authors: Niu, Wei and Guan, Jiexiong and Wang, Yanzhi and Agrawal, Gagan and Ren, Bin
    Keywords: Operator Fusion, Mobile Devices, Deep Neural Network, Compiler Optimization

    Abstract

    Deep Neural Networks (DNNs) have emerged as the core enabler of many major applications on mobile devices. To achieve high accuracy, DNN models have become increasingly deep with hundreds or even thousands of operator layers, leading to high memory and computational requirements for inference. Operator fusion (or kernel/layer fusion) is a key optimization in many state-of-the-art DNN execution frameworks, such as TensorFlow, TVM, and MNN, that aim to improve the efficiency of DNN inference. However, these frameworks usually adopt fusion approaches based on certain patterns that are too restrictive to cover the diversity of operators and layer connections, especially those seen in many extremely deep models. Polyhedral-based loop fusion techniques, on the other hand, work on a low-level view of the computation without operator-level information, and can also miss potential fusion opportunities. To address this challenge, this paper proposes a novel and extensive loop fusion framework called DNNFusion. The basic idea of this work is to work at an operator view of DNNs, but expand fusion opportunities by developing a classification of both individual operators and their combinations. In addition, DNNFusion includes 1) a novel mathematical-property-based graph rewriting framework to reduce evaluation costs and facilitate subsequent operator fusion, 2) an integrated fusion plan generation that leverages the high-level analysis and accurate light-weight profiling, and 3) additional optimizations during fusion code generation. DNNFusion is extensively evaluated on 15 DNN models with varied types of tasks, model sizes, and layer counts. The evaluation results demonstrate that DNNFusion finds up to 8.8× higher fusion opportunities.
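    To make the core idea concrete, the following is an illustrative sketch of operator fusion (not DNNFusion itself): a chain of elementwise operators, which a pattern-based fuser would only merge for hard-coded pairs, is collapsed into a single pass over the data, eliminating intermediate buffers. All operator names here are made up for the example:

```python
# Illustrative operator fusion: a chain of elementwise ops is fused into
# one kernel, so the data is traversed once instead of once per operator
# (and intermediate buffers disappear).
OPS = {"add1": lambda v: v + 1,
       "double": lambda v: 2 * v,
       "relu": lambda v: max(v, 0)}

def run_unfused(chain, xs):
    for name in chain:                 # one full pass (and buffer) per op
        xs = [OPS[name](v) for v in xs]
    return xs

def fuse(chain):
    def kernel(v):
        for name in chain:             # all ops applied in a single pass
            v = OPS[name](v)
        return v
    return kernel

chain = ["add1", "double", "relu"]
fused = [fuse(chain)(v) for v in [-3, 0, 2]]
```

    Classifying which operators (and combinations) admit such merging, rather than matching fixed patterns, is what lets the paper's framework fuse far more of a deep model's graph.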

    DOI: 10.1145/3453483.3454083


    SyRust Artifact: PLDI2021 Artifact

    Authors: Takashima, Yoshiki and Martins, Ruben and Jia, Limin and Păsăreanu, Corina
    Keywords: API Testing, Rust, Security, Software Engineering, Synthesis

    Abstract

    This artifact contains software required to replicate the results of the paper “SyRust: Automatic Testing of Rust Libraries with Semantic-Aware Program Synthesis” published at PLDI 2021.

    The artifact consists of the SyRust source code, configuration files to fully replicate our experiments, post-processing scripts to summarize data, and Docker images that maintain an environment for replicability.

    DOI: 10.1145/3453483.3454084


    Chianina: an evolving graph system for flow- and context-sensitive analyses of million lines of C code

    Authors: Zuo, Zhiqiang and Zhang, Yiyu and Pan, Qiuhong and Lu, Shenming and Li, Yue and Wang, Linzhang and Li, Xuandong and Xu, Guoqing Harry
    Keywords: static analysis, parallel computing, graph processing

    Abstract

    Sophisticated static analysis techniques often have complicated implementations, much of which provides logic for tuning and scaling rather than basic analysis functionalities. This tight coupling of basic algorithms with special treatments for scalability makes an analysis implementation hard to (1) make correct, (2) understand/work with, and (3) reuse for other clients. This paper presents Chianina, a graph system we developed for fully context- and flow-sensitive analysis of large C programs. Chianina overcomes these challenges by allowing the developer to provide only the basic algorithm of an analysis and pushing the tuning/scaling work to the underlying system. Key to the success of Chianina is (1) an evolving graph formulation of flow sensitivity and (2) the leverage of out-of-core, disk support to deal with memory blowup resulting from context sensitivity. We implemented three context- and flow-sensitive analyses on top of Chianina and scaled them to large C programs like Linux (17M LoC) on a single commodity PC.

    DOI: 10.1145/3453483.3454085


    Path-sensitive sparse analysis without path conditions

    Authors: Shi, Qingkai and Yao, Peisen and Wu, Rongxin and Zhang, Charles
    Keywords: program dependence graph, path sensitivity, Sparse analysis, SMT solving

    Abstract

    Sparse program analysis is fast as it propagates data flow facts via data dependence, skipping unnecessary control flows. However, when path-sensitively checking millions of lines of code, it is still prohibitively expensive because a huge number of path conditions have to be computed and solved via an SMT solver. This paper presents Fusion, a fused approach to inter-procedurally path-sensitive sparse analysis. In Fusion, the SMT solver does not work as a standalone tool on path conditions but directly on the program together with the sparse analysis. Such a fused design allows us to determine the path feasibility without explicitly computing path conditions, not only saving the cost of computing path conditions but also providing an opportunity to enhance the SMT solving algorithm. To the best of our knowledge, Fusion, for the first time, enables whole program bug detection on millions of lines of code in a common personal computer, with the precision of inter-procedural path-sensitivity. Compared to two state-of-the-art tools, Fusion is 10× faster.

    DOI: 10.1145/3453483.3454086


    Cypress (PLDI 2021 Artifact): Code and Benchmarks

    Authors: Itzhaky, Shachar and Peleg, Hila and Polikarpova, Nadia and Rowe, Reuben N. S. and Sergey, Ilya
    Keywords: cyclic proofs, program synthesis, program verification, separation logic

    Abstract

    Artifact accompanying the paper Cyclic Program Synthesis, published in the proceedings of PLDI 2021.

    DOI: 10.1145/3453483.3454087


    Hashing modulo alpha-equivalence

    Authors: Maziarz, Krzysztof and Ellis, Tom and Lawrence, Alan and Fitzgibbon, Andrew and Peyton Jones, Simon
    Keywords: hashing, equivalence, abstract syntax tree

    Abstract

    In many applications one wants to identify identical subtrees of a program syntax tree. This identification should ideally be robust to alpha-renaming of the program, but no existing technique has been shown to achieve this with good efficiency (better than O(n²) in expression size). We present a new, asymptotically efficient way to hash modulo alpha-equivalence. A key insight of our method is to use a weak (commutative) hash combiner at exactly one point in the construction, which admits an algorithm with O(n (log n)²) time complexity. We prove that the use of the commutative combiner nevertheless yields a strong hash with low collision probability. Numerical benchmarks attest to the asymptotic behaviour of the method.
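    A useful baseline for comparison: a single whole term can be hashed modulo alpha-equivalence by first converting bound variables to de Bruijn indices. The sketch below shows that baseline (with an ad-hoc tuple encoding of terms); note the paper solves a strictly harder problem, consistently hashing every subtree, which de Bruijn conversion does not give, since a subtree's indices depend on its context.

```python
# Baseline: hash one whole lambda term modulo alpha-equivalence via de
# Bruijn conversion. Terms are tuples: ("var", name), ("lam", name, body),
# ("app", fun, arg). Free variables keep their names.
def to_debruijn(term, env=()):
    kind = term[0]
    if kind == "var":
        name = term[1]
        return ("var", env.index(name)) if name in env else term
    if kind == "lam":
        return ("lam", to_debruijn(term[2], (term[1],) + env))
    if kind == "app":
        return ("app", to_debruijn(term[1], env), to_debruijn(term[2], env))
    raise ValueError(kind)

def alpha_hash(term):
    return hash(to_debruijn(term))

# \x. x y  and  \z. z y  are alpha-equivalent and hash identically.
t1 = ("lam", "x", ("app", ("var", "x"), ("var", "y")))
t2 = ("lam", "z", ("app", ("var", "z"), ("var", "y")))
```

    Hashing all n subtrees this way costs O(n²) in the worst case (each subtree is re-converted in its own context), which is exactly the bound the paper's commutative-combiner construction improves on.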

    DOI: 10.1145/3453483.3454088


    Phased Synthesis of Divide and Conquer Programs (Software Artifact)

    Authors: Farzan, Azadeh and Nicolet, Victor
    Keywords: Divide-And-Conquer Algorithms, Program Synthesis

    Abstract

    This software artifact implements the automatic methodology described in the paper. A README.md file has been provided with instructions on how to reproduce the results presented in the paper, as well as instructions on how to build the software from the provided sources.

    DOI: 10.1145/3453483.3454089


    Replication Package for Article: Snapshot-Free, Transparent, and Robust Memory Reclamation for Lock-Free Data Structures

    Authors: Nikolaev, Ruslan and Ravindran, Binoy
    Keywords: epoch-based reclamation, hazard pointers, lock-free, memory reclamation, non-blocking

    Abstract

    The artifact contains a VM image (VirtualBox) with preinstalled Ubuntu 18.04 and the (precompiled) benchmark. It also contains source code and instructions for manual (bare-metal) installation, as well as our data measurements and scripts for generating plots. Please see README.txt for more details.

    DOI: 10.1145/3453483.3454090


    Artifact from “Logical Bytecode Reduction”

    Authors: Kalhauge, Christian Gram and Palsberg, Jens
    Keywords: input reduction

    Abstract

    The artifact is a VirtualBox image containing the results and everything needed to reproduce the results of the paper. Furthermore, it contains the source code of jreduce.

    More information can be found in the README file in the artifact or at https://github.com/ucla-pls/pldi21-artifact/blob/master/README.md

    DOI: 10.1145/3453483.3454091


    Artifact for “Test-Case Reduction and Deduplication Almost for Free with Transformation-Based Compiler Testing”, PLDI 2021

    Authors: Donaldson, Alastair F. and Thomson, Paul and Teliman, Vasyl and Milizia, Stefano and Maselco, André
    Keywords: Compilers, metamorphic testing, SPIR-V

    Abstract

    Artifact associated with PLDI paper, providing the version of the spirv-fuzz tool that was used for evaluation in the paper, together with the ability to reproduce a number of results using the SwiftShader implementation of SPIR-V, as well as data sets associated with the full set of experiments reported in the paper.

    DOI: 10.1145/3453483.3454092


    RevTerm

    Authors: Chatterjee, Krishnendu and Goharshady, Ehsan Kafshdar and Novotný, Petr
    Keywords: Program Termination, Static Analysis

    Abstract

    RevTerm is a static analysis tool for proving non-termination of integer C programs (possibly with non-determinism). RevTerm is an implementation of our method for non-termination proving presented in the paper “Proving Non-termination by Program Reversal”.

    DOI: 10.1145/3453483.3454093


    COPSE artifact for Vectorized Secure Evaluation of Decision Forests

    Authors: Malik, Raghav and Singhal, Vidush and Gottfried, Benjamin and Kulkarni, Milind
    Keywords: Fully homomorphic encryption, vectorization

    Abstract

    The artifact is a Docker image that contains prebuilt versions of the benchmarks used to evaluate the paper, as well as a set of scripts to automatically run them and collect data.

    DOI: 10.1145/3453483.3454094


    Task parallel assembly language for uncompromising parallelism

    Authors: Rainey, Mike and Newton, Ryan R. and Hale, Kyle and Hardavellas, Nikos and Campanoni, Simone and Dinda, Peter and Acar, Umut A.
    Keywords: parallel programming languages, granularity control

    Abstract

    Achieving parallel performance and scalability involves making compromises between parallel and sequential computation. If not contained, the overheads of parallelism can easily outweigh its benefits, sometimes by orders of magnitude. Today, we expect programmers to implement this compromise by optimizing their code manually. This process is labor intensive, requires deep expertise, and reduces code quality. Recent work on heartbeat scheduling shows a promising approach that manifests the potentially vast amounts of available, latent parallelism, at a regular rate, based on even beats in time. The idea is to amortize the overheads of parallelism over the useful work performed between the beats. Heartbeat scheduling is promising in theory, but the reality is complicated: it has no known practical implementation. In this paper, we propose a practical approach to heartbeat scheduling that involves equipping the assembly language with a small set of primitives. These primitives leverage existing kernel and hardware support for interrupts to allow parallelism to remain latent, until a heartbeat, when it can be manifested with low cost. Our Task Parallel Assembly Language (TPAL) is a compact, RISC-like assembly language. We specify TPAL through an abstract machine and implement the abstract machine as compiler transformations for C/C++ code and a specialized run-time system. We present an evaluation on both the Linux and the Nautilus kernels, considering a range of heartbeat interrupt mechanisms. The evaluation shows that TPAL can dramatically reduce the overheads of parallelism without compromising scalability.
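    The amortization argument can be illustrated with a toy sequential simulation (not TPAL): parallelism in a recursive sum stays latent, and a task is "promoted" only when a beat has elapsed, so spawn overhead is bounded by the beat rate regardless of how fine-grained the recursion is. Beats are simulated here with a simple operation counter; in the real system they come from kernel/hardware interrupts.

```python
# Toy heartbeat-scheduling sketch: recursive sum where latent parallelism
# is promoted to a "task" at most once per HEARTBEAT units of work. The
# counter stands in for the timer interrupt that drives real heartbeats.
HEARTBEAT = 100
clock = {"ops": 0, "promotions": 0}

def heartbeat_sum(xs):
    clock["ops"] += 1
    if len(xs) <= 1:
        return xs[0] if xs else 0
    mid = len(xs) // 2
    if clock["ops"] % HEARTBEAT == 0:
        clock["promotions"] += 1      # a real runtime would hand xs[:mid]
                                      # to another worker at this point
    return heartbeat_sum(xs[:mid]) + heartbeat_sum(xs[mid:])

total = heartbeat_sum(list(range(1000)))
```

    Of the roughly 2n recursive calls, only about one per hundred ever considers spawning, which is the sense in which parallelism overhead is amortized over the work between beats.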

    DOI: 10.1145/3453483.3460969


    JPortal: precise and efficient control-flow tracing for JVM programs with Intel processor trace

    作者: Zuo, Zhiqiang and Ji, Kai and Wang, Yifei and Tao, Wei and Wang, Linzhang and Li, Xuandong and Xu, Guoqing Harry
    关键词: control-flow tracing, JVM, Intel PT

    Abstract

    Hardware tracing modules such as Intel Processor Trace (PT) perform continuous control-flow tracing of an end-to-end program execution with ultra-low overhead. PT has been used in a variety of contexts to support applications such as testing, debugging, and performance diagnosis. However, these hardware modules have so far been used only to trace native programs, which are directly compiled down to machine code. As high-level languages (HLL) such as Java and Go become increasingly popular, there is a pressing need to extend these benefits to the HLL community. This paper presents JPortal, a JVM-based profiling tool that bridges the gap between HLL applications and low-level hardware traces by using a set of algorithms to precisely recover an HLL program’s control flow from PT traces. An evaluation of JPortal with the DaCapo benchmark shows that JPortal achieves an overall 80% accuracy for end-to-end control flow profiling with only a 4-16% runtime overhead.
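The central recovery problem can be sketched concretely: PT records only branch outcomes (taken/not-taken bits) for conditional branches, so reconstructing a path means replaying those bits against the program's control-flow graph. The CFG and block names below are invented for illustration; JPortal's actual algorithms handle the much harder JIT-compiled, multi-method setting:

```python
def recover_path(cfg, entry, tnt_bits):
    """cfg maps a block to (taken_successor, fallthrough_successor) for
    conditional blocks, a single successor for unconditional ones,
    or None at an exit block."""
    path, block, bits = [entry], entry, iter(tnt_bits)
    while True:
        succ = cfg[block]
        if succ is None:                 # exit block: path fully recovered
            return path
        if isinstance(succ, tuple):      # conditional branch: consume one bit
            block = succ[0] if next(bits) else succ[1]
        else:                            # unconditional edge: no bit needed
            block = succ
        path.append(block)

# Hypothetical CFG for: if (c) { A } else { B }
cfg = {"entry": ("A", "B"), "A": "join", "B": "join", "join": None}
print(recover_path(cfg, "entry", [1]))   # ['entry', 'A', 'join']
```

The sketch shows why the trace is so compact (one bit per conditional branch) and why precise recovery requires an accurate CFG on the software side.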

    DOI: 10.1145/3453483.3454096


    Source code and virtual machine image for CompCertO: Compiling Certified Open C Components

    作者: Koenig, Jérémie
    关键词: CompCert, Compilers, Compositional compiler correctness, Game semantics, Language interface, Simulation convention, Software verification

    Abstract

    This artifact contains the source code for CompCertO v0.1 and a virtual machine image where all required dependencies have been installed on a Debian GNU/Linux system. The virtual machine image also contains a pre-built version of CompCertO and its documentation, which can immediately be browsed in CoqIDE and Firefox.

    DOI: 10.1145/3453483.3454097


    Example-Guided Synthesis of Relational Queries

    作者: Thakkar, Aalok and Naik, Aaditya and Sands, Nathaniel and Alur, Rajeev and Naik, Mayur and Raghothaman, Mukund
    关键词: Program Synthesis, Programming-by-example

    Abstract

    Tool for end-to-end automated synthesis of relational queries from input-output examples.

    DOI: 10.1145/3453483.3454098


    Canary: practical static detection of inter-thread value-flow bugs

    作者: Cai, Yuandao and Yao, Peisen and Zhang, Charles
    关键词: static analysis, interference dependence, concurrency bugs detection, Concurrency

    Abstract

    Concurrent programs remain prone to bugs arising from subtle thread interleavings. Traditional static analyses for concurrent programs, such as data-flow analysis and symbolic execution, must explicitly explore redundant control states, leading to prohibitive computational complexity. This paper presents Canary, a value-flow analysis framework for concurrent programs that makes it practical to statically find diverse inter-thread value-flow bugs. Our work is the first to cast concurrency bug detection as a source-sink reachability problem, effectively reducing redundant thread interleavings. Specifically, we propose a scalable thread-modular algorithm that captures data and interference dependence in a value-flow graph. The value-flow edges are annotated with execution constraints as guards that describe the conditions under which values flow. Canary then traverses the graph to detect concurrency defects, tracking source-sink properties and solving the aggregated guards along value flows with an SMT solver to decide the realizability of the interleaved executions. Experiments show that Canary is precise, scalable, and practical, detecting over eighteen previously unknown concurrency bugs in large, widely used software systems with a low false-positive rate.
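The guarded source-sink reachability idea can be sketched in miniature: walk the value-flow graph from source to sink, conjoin the guards on the edges, and report a candidate bug only if the conjunction is satisfiable. A real tool like Canary hands the conjunction to an SMT solver; this toy version brute-forces tiny boolean guards, and the graph and guards are invented examples:

```python
from itertools import product

def satisfiable(guards, vars):
    # Brute-force stand-in for an SMT query over boolean guards.
    return any(all(g(dict(zip(vars, vals))) for g in guards)
               for vals in product([False, True], repeat=len(vars)))

def find_bugs(edges, source, sink, vars):
    """DFS from source to sink, accumulating the guards along each path."""
    bugs, stack = [], [(source, [])]
    while stack:
        node, guards = stack.pop()
        if node == sink:
            if satisfiable(guards, vars):
                bugs.append(guards)      # realizable interleaving: report
            continue
        for dst, g in edges.get(node, []):
            stack.append((dst, guards + [g]))
    return bugs

# A value flows to the sink only when flag is simultaneously true
# (thread 1's guard) and false (thread 2's guard): unsatisfiable,
# so the path is pruned as an infeasible interleaving.
edges = {"src": [("mid", lambda e: e["flag"])],
         "mid": [("sink", lambda e: not e["flag"])]}
print(len(find_bugs(edges, "src", "sink", ["flag"])))  # 0: guards conflict
```

The guard check is what filters out interleavings that a purely graph-reachability analysis would wrongly report.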

    DOI: 10.1145/3453483.3454099


    Replication Package for Robustness Certification with Generative Models

    作者: Mirman, Matthew and Hägele, Alexander
    关键词: Abstract Interpretation, Deep Learning, Verification

    Abstract

    This is the code necessary to reproduce the experiments found within the paper.

    DOI: 10.1145/3453483.3454100


    Software Artifacts for Paper: "Execution Reconstruction: Harnessing Failure Reoccurrences for Failure Reproduction"

    作者: Zuo, Gefei and Ma, Jiacheng and Quinn, Andrew and Bhatotia, Pramod and Fonseca, Pedro and Kasikci, Baris
    关键词: debugging, klee, symbolic execution

    Abstract

    Components:
    1. README.txt: instructions to walk through the artifact.
    2. er-docker.tar.gz: the Docker image of the main artifact. Inside the Docker image, /ER is a snapshot of our source code repository:
       a. Shepherded Symbolic Execution, based on KLEE: (1) symbolic execution engine; (2) POSIX runtime.
       b. Key Data Value Selection: (1) KLEE constraint-graph to DOT or JSON converter; (2) constraint-graph visualization: the DOT viewer Gephi (external tool), plus a Gephi Python plugin for better visualization; (3) graph-analysis script.
       c. Data Recording (PTWrite) Instrumentation: (1) command-line tool; (2) instrumentation pass.
       d. Software Execution Trace: examination tool.
       e. Artifact and Docker image building instructions.
    3. LICENSE: software license of ER and its dependencies.

    Purpose: ER is a hybrid failure-reproduction tool that combines symbolic execution with record/replay. At runtime, ER collects control-flow traces of reoccurring failures and incrementally traces selected data values each time the failure reoccurs. Offline, ER runs symbolic execution (KLEE) to gather the path constraints of the failure-inducing input and reconstructs the input data by constraint solving. When the path constraints become too complex for the solver to reason about, ER analyzes the constraints and instructs the runtime data tracing to also record values that, if known, would simplify the complex constraints.

    After such iterative rounds of online tracing and offline symbolic execution, ER generates a failure-inducing input that is guaranteed to reproduce the reoccurring failure.
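The iterative procedure can be sketched as a small loop: each pass represents one failure reoccurrence; the offline solver either reconstructs the input or nominates one more variable for the runtime to record. The solver, picker, and oracle below are invented stand-ins, not ER's actual components:

```python
def er_loop(solve, pick_hardest, oracle, unknowns):
    """solve(known) -> input dict, or None when constraints are too hard.
    pick_hardest(known) -> name of the variable whose concrete value
    would most simplify the constraints; oracle(var) plays the role of
    recording that value at the next failure reoccurrence."""
    known = {}
    for _ in range(len(unknowns) + 1):      # each pass = one reoccurrence
        result = solve(known)
        if result is not None:
            return result                   # failure-inducing input found
        var = pick_hardest(known)           # constraints too complex:
        known[var] = oracle(var)            # record this value next time

# Invented example: y is a nonlinear function of x, so the solver gives
# up until x has been recorded; then the remaining constraint is trivial.
def solve(known):
    if "x" in known:
        return {"x": known["x"], "y": (known["x"] * 31) & 0xFF}
    return None

print(er_loop(solve, lambda k: "x", lambda v: 7, ["x"]))
```

The point of the sketch is the feedback edge: solver difficulty drives what gets traced, so each reoccurrence makes the offline problem strictly easier.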

    DOI: 10.1145/3453483.3454101


    Replication Package for Article: Quantitative Analysis of Assertion Violations in Probabilistic Programs

    作者: Wang, Jinyi and Sun, Yican and Fu, Hongfei and Chatterjee, Krishnendu and Goharshady, Amir Kafshdar
    关键词: Assertion, Automated Verification, Probabilistic Programs

    Abstract

    This artifact contains our implementation of the three synthesis algorithms described in the paper. Algorithms 1 and 2 are presented in Sections 5.1 and 5.2, respectively, and Algorithm 3 in Section 6.

    DOI: 10.1145/3453483.3454102


    IOOpt: automatic derivation of I/O complexity bounds for affine programs

    作者: Olivry, Auguste and Iooss, Guillaume and Tollenaere, Nicolas and Rountev, Atanas and Sadayappan, P. and Rastello, Fabrice
    关键词: Tensor Contraction, Polyhedral model, I/O complexity, Convolution, Compilation

    Abstract

    Evaluating the complexity of an algorithm is an important step when developing applications, as it impacts both time and energy performance. Computational complexity, the number of dynamic operations regardless of execution order, is easy to characterize for affine programs. Data-movement (or I/O) complexity is harder to evaluate: it is the minimum number of I/O transfers between a slow (e.g., main memory) and a fast (e.g., local scratchpad) storage location, over all valid schedules. This paper presents IOOpt, a fully automated tool that bounds the data movement of an affine (tilable) program. Given a tilable program described in a DSL, it automatically computes:
    1. a lower bound on the I/O complexity, as a symbolic expression of the cache size and program parameters;
    2. an upper bound that allows one to assess the tightness of the lower bound;
    3. a tiling recommendation (loop permutation and tile sizes) that matches the upper bound.
    For the lower bound algorithm, which applies to any affine program, substantial effort has been made to provide bounds that are as tight as possible for neural networks: in particular, it extends the previous work of Olivry et al. to handle multi-dimensional reductions and to expose the constraints associated with the small dimensions present in convolutions. For the upper bound algorithm, which reasons on the tile band of the program (e.g., the output of a polyhedral compiler such as PluTo), the algebraic computations involved have been tuned to behave well on tensor computations such as direct tensor contractions or direct convolutions. As a bonus, the upper bound algorithm, extended to multi-level caches, can provide the programmer with a useful tiling recommendation. We demonstrate the effectiveness of our tool by deriving symbolic lower and upper bounds for several tensor contraction and convolution kernels. We then evaluate numerically the tightness of our bounds using the convolution layers of Yolo9000 and representative tensor contractions from the TCCG benchmark suite. Finally, we show the pertinence of our I/O complexity model by reporting the running time of the recommended tiled code for the convolution layers of Yolo9000.
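The flavor of symbolic bound involved can be illustrated on square matrix multiply with the textbook Hong-Kung-style formulas; these are classical results used here as an example of the kind of expression such a tool derives, not IOOpt's actual output:

```python
from math import sqrt

# C += A*B with N^3 multiply-adds and a fast memory of size S words.
# Classical lower bound: ~ 2*N**3 / sqrt(S) transfers.
# Square tiling with tile size T (3*T**2 <= S) attains O(N**3 / T):
# each T x T x T tile touches ~3*T**2 words for T**3 multiply-adds.

def matmul_io_lower_bound(N, S):
    return 2 * N**3 / sqrt(S)

def matmul_io_tiled(N, T):
    return 3 * N**3 / T

N, S = 1024, 3 * 32**2            # cache holds three 32x32 tiles
lower = matmul_io_lower_bound(N, S)
upper = matmul_io_tiled(N, 32)
print(upper / lower < 3)          # True: bounds agree within a small factor
```

Comparing the two expressions is exactly the tightness assessment the abstract describes: when the upper bound of a concrete tiling sits within a small constant factor of the symbolic lower bound, the tiling recommendation is near-optimal in I/O.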

    DOI: 10.1145/3453483.3454103


    Artifact for the paper Specification Synthesis with Constrained Horn Clauses

    作者: Prabhu, Sumanth and Fedyukovich, Grigory and Madhukar, Kumar and D’Souza, Deepak
    关键词: automated verification, inductive invariants, SMT solvers, specification synthesis

    Abstract

    The artifact is a zip file consisting of:
    - pldi21.ova: a VirtualBox image containing the tools needed to reproduce the data from the paper
    - pldi21.md5: the md5sum of pldi21.ova
    - README: instructions on how to use the artifact

    DOI: 10.1145/3453483.3454104


    Mirror: Making Lock-Free Data Structures Persistent

    作者: Friedman, Michal and Petrank, Erez and Ramalhete, Pedro
    关键词: concurrent data structures, lock-free, Non-volatile memory

    Abstract

    This artifact provides a way to test different concurrent, persistent and lock-free data structures that were specifically designed for non-volatile memory.

    DOI: 10.1145/3453483.3454105


    Replication Package for Article: AKG: Automatic Kernel Generation for Neural Processing Units using Polyhedral Transformations

    作者: Zhao, Jie and Li, Bojie and Nie, Wang and Geng, Zhen and Zhang, Renwei and Gao, Xiong and Cheng, Bin and Wu, Chen and Cheng, Yun and Li, Zheng and Di, Peng and Zhang, Kun and Jin, Xuefeng
    关键词: auto-tuning, Deep neural networks, neural processing units, polyhedral model, TVM

    Abstract

    This is the artifact description for our paper “AKG: Automatic Kernel Generation for Neural Processing Units using Polyhedral Transformations”. Our paper presents a fully automatic framework that deploys and tunes deep learning models on NPUs; the target for the experiments is a Huawei Ascend910 accelerator. The best way to perform the artifact evaluation is to purchase such an accelerator. One can also ask Huawei for a remotely accessible server that has been equipped with a kc_air port to allow for the artifact evaluation. The kc_air port provides an RPC (Remote Procedure Call) mechanism on top of an x86 machine and can be used to evaluate some of the results described in the paper.

    We recorded the number of cycles for each benchmark used in the experiments and compared performance against two baselines, TVM and the Huawei Ascend libraries. The relation between cycle counts and microseconds is given under Figure 11 of the paper. Standard TVM cannot generate code for the Huawei Ascend910 chips, so the TVM version used is not the standard one developed by Tianqi Chen et al. but one adapted by Huawei engineers; we refer to this adapted version as Huawei TVM in what follows. We cannot provide the code or schedule templates generated by Huawei TVM or the vendor libraries, because releasing them may violate Huawei’s confidentiality agreements; we regret this limitation.

    However, we have obtained permission from the company and tried our best to make this artifact evaluation as functional as described in our paper. In this artifact description we focus on the code generated by our framework: one can experiment with our tool by generating code and running the examples used in the experiments of the paper.

    DOI: 10.1145/3453483.3454106


    Replication Package for Frequent Background Polling on a Shared Thread, using Light-Weight Compiler Interrupts

    作者: Basu, Nilanjana and Montanari, Claudio and Eriksson, Jakob
    关键词: Compiler Interrupts, interrupt accuracy and overhead

    Abstract

    The artifact contains the libraries for a Compiler Interrupt pass and the code for all experiments reported in the paper.

    DOI: 10.1145/3453483.3454107


    Satisfiability modulo ordering consistency theory for multi-threaded program verification

    作者: He, Fei and Sun, Zhihang and Fan, Hongyu
    关键词: satisfiability modulo theory, memory consistency model, concurrency, Program verification

    Abstract

    Analyzing multi-threaded programs is hard due to the vast number of thread interleavings. Partial orders can be used to model and analyze multi-threaded programs, but there is no dedicated decision procedure for solving partial-order constraints. In this paper, we propose a novel ordering consistency theory for multi-threaded program verification under sequential consistency, and we elaborate its theory solver, which realizes incremental consistency checking, minimal conflict-clause generation, and specialized theory propagation to improve the efficiency of SMT solving. Extensive experiments on credible benchmarks show significant improvements from our approach.
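The heart of such a theory solver can be sketched: ordering literals assert edges "a executes before b", the asserted partial order is consistent iff the edge set is acyclic, and a detected cycle yields a conflict clause built from exactly the literals on the cycle. The DFS below is a non-incremental sketch under invented names; the paper's solver checks incrementally as literals are added:

```python
def find_cycle(edges):
    """Return one cycle as a list of (before, after) edges, or None."""
    graph = {}
    for a, b in edges:
        graph.setdefault(a, []).append(b)
    state, parent = {}, {}

    def dfs(u):
        state[u] = "open"
        for v in graph.get(u, []):
            if state.get(v) == "open":        # back edge closes a cycle:
                cyc, w = [(u, v)], u          # walk parents back to v to
                while w != v:                 # collect the cycle's edges,
                    cyc.append((parent[w], w))  # i.e. the conflict clause
                    w = parent[w]
                return cyc
            if v not in state:
                parent[v] = u
                found = dfs(v)
                if found:
                    return found
        state[u] = "done"
        return None

    for node in list(graph):
        if node not in state:
            found = dfs(node)
            if found:
                return found
    return None

# Three ordering literals that cannot hold simultaneously.
conflict = find_cycle([("w1", "r2"), ("r2", "w3"), ("w3", "w1")])
print(conflict is not None)    # True: report these literals as a conflict
```

Returning only the cycle's own edges, rather than every asserted literal, is what makes the learned conflict clause minimal and hence effective for pruning.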

    DOI: 10.1145/3453483.3454108


    Bliss: auto-tuning complex applications using a pool of diverse lightweight learning models

    作者: Roy, Rohan Basu and Patel, Tirthak and Gadepally, Vijay and Tiwari, Devesh
    关键词: Parameter tuning, Auto-tuning HPC applications

    Abstract

    As parallel applications become more complex, auto-tuning becomes more desirable, challenging, and time-consuming. We propose Bliss, a novel solution for auto-tuning parallel applications that requires no a priori information about the application, domain-specific knowledge, or instrumentation. Bliss demonstrates how to leverage a pool of Bayesian optimization models to find a near-optimal parameter setting 1.64×

    DOI: 10.1145/3453483.3454109


    Replication Package for Article: Termination Analysis without the Tears

    作者: Zhu, Shaowei and Kincaid, Zachary
    关键词: algebraic path problems, Algebraic program analysis, loop summarization, termination analysis

    Abstract

    The artifact is a virtual machine that contains ComPACT, a compositional and monotone termination analysis for C programs. The virtual machine also contains all software and dependencies required to replicate the experimental results of the PLDI 2021 paper “Termination Analysis without the Tears”.

    DOI: 10.1145/3453483.3454110


    Probabilistic Termination Analysis Tools for: On Probabilistic Termination of Functional Programs with Continuous Distributions

    作者: Beutner, Raven and Ong, Luke
    关键词: almost-sure termination, functional programs, lower bounds, Probabilistic programs, termination

    Abstract

    Tools for computing lower bounds on the probability of termination (LowerBound) and for verifying almost-sure termination (AST) of non-affine recursive programs (astnar), for programs with continuous distributions. The tools build on the theoretical results of the PLDI paper “On Probabilistic Termination of Functional Programs with Continuous Distributions”.

    DOI: 10.1145/3453483.3454111


    CoSplit (PLDI 2021 Artefact)

    作者: Pîrlea, George
    关键词: automatic parallelisation, blockchain, sharding, smart contracts, static analysis

    Abstract

    Virtual machine: The CoSplit.ova file is a virtual machine image containing the full artefact, including the CoSplit static analysis, its integration with the Zilliqa blockchain, the benchmark suite used for evaluation, the Ethereum dataset, and the Jupyter notebook used to analyse the dataset. This is the artefact that was evaluated during the PLDI 2021 Artifact Evaluation process.

    The virtual machine image was generated using VirtualBox version 6.1.18 r142142 and is known to work with that version of the software.

    Source code: Please download cosplit-artefact-archive.zip. This includes the full source code, including dependencies, and the Ethereum dataset. The archive produced by GitHub (dranov/cosplit-artefact-v0.1-beta.zip) does not include the dependencies and dataset.

    DOI: 10.1145/3453483.3454112


