| Moderate | Medium | Haskell, C |[Marco Zocca](mailto:zocca.marco@gmail)|
## Expand json-autotype support to other functional programming languages
[json-autotype](http://github.com/mgajda/json-autotype) is currently the *most advanced* type and parser generator from JSON to Haskell. Using union types, it acts as a highly accurate type provider.
There has been great outside interest in enhancing it to produce types and parsers for other strongly typed programming languages such as Elm, PureScript, Scala, Elixir, F#, or Java.
This project is of great practical importance, and would allow us to pitch Haskell as a practical tool to other strongly typed language communities.
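For illustration, here is a hand-written sketch of the kind of Haskell declarations such a type provider derives from a single JSON sample. The `Person` record and its field names are invented for this example; json-autotype's real output differs in detail (in particular, it infers union types for heterogeneous fields).

```haskell
{-# LANGUAGE DeriveGeneric #-}
-- Hypothetical sketch of generated output for the sample document below;
-- not the literal output of json-autotype.
module PersonExample where

import           Data.Aeson                 (FromJSON, ToJSON, decode)
import qualified Data.ByteString.Lazy.Char8 as BL
import           GHC.Generics               (Generic)

-- Sample input: {"name": "Alice", "age": 30, "tags": ["dev", "haskell"]}
data Person = Person
  { name :: String
  , age  :: Int
  , tags :: [String]
  } deriving (Show, Generic)

instance FromJSON Person
instance ToJSON Person

main :: IO ()
main = print (decode (BL.pack "{\"name\":\"Alice\",\"age\":30,\"tags\":[\"dev\",\"haskell\"]}") :: Maybe Person)
```

Generating analogous declarations for Elm, PureScript, or Scala would reuse the same inferred type structure and only swap the code emitter.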
## Expand json-autotype to directly query whole Web APIs, and create Servant endpoints
[json-autotype](http://github.com/mgajda/json-autotype) is currently the *most advanced* type and parser generator from JSON to Haskell.
However, most programmers do not use JSON just to read and write it, but to interact with Web APIs. One would dream of generating all the Web API types, together with client calls and server endpoints, from either Swagger descriptions or a handful of example queries against the Web APIs.
Implementing this project would position Haskell as possibly the best language for interacting with Web APIs in a typed way, and make it superior to the pure GraphQL solutions that are currently state of the art.
| Difficult | High | Haskell, Servant or Apiary |[Michal Gajda](mailto:migamake@migamake.com)|
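As a rough illustration, the sketch below shows the kind of Servant API type and client function such a generator could emit for a single endpoint. The `User` record and the `/users/:id` route are invented for this example rather than taken from any particular Swagger description.

```haskell
{-# LANGUAGE DataKinds, TypeOperators, DeriveGeneric #-}
-- Hypothetical sketch of generated Servant code; names are illustrative only.
module ApiSketch where

import Data.Aeson     (FromJSON, ToJSON)
import Data.Proxy     (Proxy (..))
import GHC.Generics   (Generic)
import Servant.API
import Servant.Client (ClientM, client)

data User = User
  { userId   :: Int
  , userName :: String
  } deriving (Show, Generic)

instance FromJSON User
instance ToJSON User

-- One endpoint: GET /users/:id returning a JSON-encoded User.
type UserAPI = "users" :> Capture "id" Int :> Get '[JSON] User

-- A client call derived mechanically from the API type.
getUser :: Int -> ClientM User
getUser = client (Proxy :: Proxy UserAPI)
```

The same generated API type could also back a server implementation, so both client calls and Servant endpoints would come from one description.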
## Automatic discovery of Data Science libraries for a project
As our package database ([Hackage](https://hackage.haskell.org)) grows, more and more data formats can be parsed easily by existing libraries. But it is increasingly difficult to find the relevant parsers and to check whether they support a given file format out of the box.
Data scientists interested in quick-and-dirty analyses currently need to either:
* spend time just on parser and library discovery for new formats,
* limit themselves to projects that only use formats they know well (like `.csv`),
* fall back to an inefficient process using dynamically typed languages like Python or Octave.
It would be great to use Stackage and Hackage to establish a database of supported formats, along with demonstration code that reads each file type.
First we would build a tool that automatically recognizes file types and tries to parse them with the standard libraries; then we would turn it into a web service that takes a data directory and returns a Haskell program that parses all the data inside.
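A minimal sketch of the recognition step might look like the following. It assumes only aeson and cassava as the parsers to try, whereas the real tool would draw on a much larger catalogue of parsers harvested from Hackage and Stackage.

```haskell
-- Minimal sketch of format recognition: try each known parser on a file
-- and report which ones succeed. The candidate list here is a stand-in
-- for a database built from Hackage/Stackage.
module Recognize where

import qualified Data.Aeson           as Aeson
import qualified Data.ByteString.Lazy as BL
import qualified Data.Csv             as Csv
import qualified Data.Vector          as V
import           Data.Maybe           (isJust, mapMaybe)

-- Each candidate format pairs a label with a cheap "does it parse?" check.
candidates :: [(String, BL.ByteString -> Bool)]
candidates =
  [ ("JSON (aeson)", \bs ->
      isJust (Aeson.decode bs :: Maybe Aeson.Value))
  , ("CSV (cassava)", \bs ->  -- naive check: cassava accepts many inputs
      either (const False) (const True)
        (Csv.decode Csv.NoHeader bs
           :: Either String (V.Vector [BL.ByteString])))
  ]

-- Report which known formats parse the given file out of the box.
recognize :: FilePath -> IO [String]
recognize path = do
  bs <- BL.readFile path
  pure (mapMaybe (\(label, ok) -> if ok bs then Just label else Nothing) candidates)
```

The web service would run this recognition over every file in the uploaded directory and then assemble the corresponding demonstration snippets into a single Haskell program.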