Are We Fast Yet? Comparing Language Implementations with Objects, Closures, Arrays, and Strings
===================================================================================================

- [![Build Status](https://travis-ci.org/smarr/are-we-fast-yet.svg?branch=master)](https://travis-ci.org/smarr/are-we-fast-yet)
+ [![Build Status](https://github.com/smarr/are-we-fast-yet/actions/workflows/ci.yml/badge.svg)](https://github.com/smarr/are-we-fast-yet/actions/workflows/ci.yml)

## Goal

The goal of this project is to assess whether a language implementation is
- highly optimizing and thus is able to remove the overhead of programming
- abstractions and frameworks. We are interested in comparing language
- implementations with each other and optimize their compilers as well as the
+ *highly optimizing* and thus able to remove the overhead of programming
+ abstractions and frameworks. We are interested in *comparing language
+ implementations* (not _languages_!) with each other and in optimizing their compilers as well as the
run-time representation of objects, closures, arrays, and strings.

This is in contrast to other projects such as the [Computer Language Benchmark
game][CLBG], which encourage finding the
smartest possible way to express a problem in a language to achieve best
- performance.
+ performance, an equally interesting but different problem.

- ##### Approach
+ #### Approach

To allow us to compare the degree of optimization done by the implementations
as well as the absolute performance achieved, we set the following basic rules:

1. The benchmark is 'identical' for all languages.
   This is achieved by relying only on a widely available and commonly used
   subset of language features and data types.

2. The benchmarks should use the language 'idiomatically'.
   This means they should be realized as much as possible with idiomatic
   code in each language, while relying only on the core set of abstractions
   (see the sketch below).
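
As a purely illustrative sketch (not taken from the benchmark suite; the class and method names are invented for this example), the following Java snippet shows the flavor of code these rules allow: a plain object, a closure, an array, and a result-verification step, with nothing beyond that core subset.

```java
// Illustrative sketch only: a toy benchmark restricted to the core subset
// (a plain object, a closure, an array, integer arithmetic) plus a
// verification step, mirroring the style the rules above describe.
import java.util.function.IntUnaryOperator;

final class SumOfSquares {
  // A closure; the benchmarks exercise this kind of higher-order code.
  private final IntUnaryOperator square = x -> x * x;

  // Fills a plain int array and folds it with the closure.
  int benchmark() {
    int[] values = new int[100];
    for (int i = 0; i < values.length; i += 1) {
      values[i] = i;
    }
    int sum = 0;
    for (int v : values) {
      sum += square.applyAsInt(v);
    }
    return sum;
  }

  // Each benchmark checks its own result; 328350 is the sum of squares 0..99.
  boolean verifyResult(int result) {
    return result == 328350;
  }

  public static void main(String[] args) {
    SumOfSquares b = new SumOfSquares();
    System.out.println(b.verifyResult(b.benchmark()));
  }
}
```

Nothing in the sketch needs reflection, metaprogramming, or specialized library types, which is what allows the same program to be ported essentially unchanged across language implementations.
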
@@ -34,7 +34,7 @@ For a description of the set of common language abstractions see [the *core*
language](docs/core-language.md) document.

The initial publication describing the project is [Cross-Language Compiler
- Benchmarking: Are We Fast Yet?][3] and can be cited as follows:
+ Benchmarking: Are We Fast Yet?][3] and can be cited as follows ([bib file][28]):

> Stefan Marr, Benoit Daloze, Hanspeter Mössenböck. 2016.
> [Cross-Language Compiler Benchmarking: Are We Fast Yet?][4]
@@ -221,7 +221,7 @@ benchmarks.

- [Simple Object Machine Implementation in a Functional Programming Language][20]
  Filip Říha. Bachelor's Thesis, CTU Prague, 2023.

- [Supporting multi-scope and multi-level compilation in a
  meta-tracing just-in-time compiler][23]
  Y. Izawa. PhD Dissertation. Tokyo Institute of Technology, 2023.
@@ -326,6 +326,7 @@ benchmarks.
[25]: https://stefan-marr.de/downloads/acmsac23-huang-et-al-optimizing-the-order-of-bytecode-handlers-in-interpreters-using-a-genetic-algorithm.pdf
[26]: https://drops.dagstuhl.de/opus/volltexte/2019/10796/pdf/LIPIcs-ECOOP-2019-4.pdf
[27]: http://www.jot.fm/issues/issue_2022_02/article2.pdf
+ [28]: https://github.com/smarr/are-we-fast-yet/blob/master/CITATION.bib

[CD]: https://www.cs.purdue.edu/sss/projects/cdx/
[CDjs]: https://github.com/WebKit/webkit/tree/master/PerformanceTests/JetStream/cdjs