
using a new child process upon each request #3

Open
thkang2 opened this issue Aug 2, 2016 · 4 comments

Comments


thkang2 commented Aug 2, 2016

Won't it be extremely large overhead to boot up a child Haskell process and communicate with it via stdin/stdout? Even if the Haskell code is just putStrLn "Hello World!"

I understand this isn't for production; I'm just asking whether you explored any other options, as I'm interested in running Haskell code on AWS Lambda myself.

Thanks!


bwbaugh commented Aug 2, 2016

Disclaimer: I've only done toy hello-world examples so far with Haskell on Lambda.

Ideally the executable would be running in the background, frozen from an earlier Lambda invocation. (I don't think the current code allows for this, but it should be possible to make it do so.)

I'm not sure what the other options would be, aside from communicating with something other than stdin/stdout (HTTP, TCP?).

With that said, I've heard Haskell has a pretty low overhead for starting up (see the link below). Perhaps even starting a new executable each time will be "fast enough", at least for a proof of concept that can later be optimized with a little more work?

http://roscidus.com/blog/blog/2013/06/09/choosing-a-python-replacement-for-0install/#speed-and-size



abailly commented Aug 2, 2016

Hi Wesley,

What do you mean by "frozen"? Having some process forked that runs in the background and with which the front end communicates through TCP? This assumes the container is not immediately garbage-collected and can be reused across invocations. Am I understanding your suggestion correctly?

I think the startup overhead of a Haskell process is actually pretty low, since it's native code. There is nothing like the boot time you can observe on a JVM.



bwbaugh commented Aug 2, 2016

Yes, the Lambda docs indicate that subsequent invocations MAY reuse an existing process (it's not guaranteed). I don't think you'd need TCP to take advantage of this.

Are you saying JVM boot time is slow? How does the overhead of starting a Haskell executable differ from starting a Python executable (interpreter) or a Node executable, etc.? For the runtimes that Lambda natively supports, you may not notice the startup overhead if you run the function a lot, thanks to container reuse and the freeze/thaw process cycle. However, if you invoke Lambda functions infrequently, then no matter the runtime (even adding in Haskell) you will have relatively slow starts on the first invocation.

I'd suggest trying it out and measuring the timings / overhead for yourself (and getting them into the README, since this will likely be a FAQ). I do think the best option is to start a Haskell executable once when the Lambda container starts and then communicate with it for subsequent invocations, but you have to be careful that it's done correctly, otherwise you end up with issues like #2.



metaleap commented Oct 1, 2017

The screenshot really says it already: "request duration 118ms, billed 200ms". That seems to be about 100x what you'd shoot for in a server-side hello-world responder. Maybe that's just the first "warm-up" request; have you compared after hitting the program with multiple subsequent requests?
