This repository was archived by the owner on Jun 21, 2021. It is now read-only.

Best practices for using cloud-haskell in an online multiplayer game? #19

Closed
OlivierSohn opened this issue Feb 4, 2018 · 7 comments

Comments

@OlivierSohn

Hello,

I'm looking for advice on how to turn my single-player local game into an online multiplayer cooperative game, using cloud-haskell for networking:

In this game, each player controls a ship, and can fire a laser from that ship.

So Client nodes will send player events to the server node, and the server will periodically send game updates to the clients.

Are there some real-world examples, or tutorials that I could look at for inspiration?

My thinking so far is to do something like in ChatClient.hs / ChatServer.hs to "connect" the client to the server, and then use typed channels to transfer player events and game updates between them. In case I am missing an important aspect, please tell me!
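
Concretely, I'm imagining something roughly like the following (just a sketch; PlayerEvent, GameUpdate and the join handshake are placeholders, not the real game types):

{-# LANGUAGE DeriveGeneric #-}
import Control.Distributed.Process
import Data.Binary (Binary)
import GHC.Generics (Generic)

data PlayerEvent = Fire | Accelerate Double deriving (Generic)
data GameUpdate  = WorldState Int           deriving (Generic)
instance Binary PlayerEvent
instance Binary GameUpdate

clientProcess :: ProcessId -> Process ()
clientProcess serverPid = do
  -- Only SendPorts are Serializable, so each side creates the channel it
  -- wants to receive on and ships the SendPort to the other side.
  (updSend, updRecv) <- newChan :: Process (SendPort GameUpdate, ReceivePort GameUpdate)
  send serverPid updSend                                -- "join" message
  evSend <- expect :: Process (SendPort PlayerEvent)    -- server's reply
  sendChan evSend Fire                                  -- push a player event
  _update <- receiveChan updRecv                        -- pull a game update
  return ()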

Thank you :)

OlivierSohn changed the title from "Best practices for using cloud-haskell in an online multiplayer game use case?" to "Best practices for using cloud-haskell in an online multiplayer game?" on Feb 4, 2018
@OlivierSohn
Author

While prototyping, I encountered the following issue:

To capture player input (key presses), I poll events using glfwPollEvents, which needs to be called from the main thread, once per game loop.

But when calling glfwPollEvents from code passed to runProcess, I get this crash, indicating I'm not on the main thread anymore:

2018-02-04 19:57:51.104 imj-game-hamazed-exe[64868:8363909] *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'nextEventMatchingMask should only be called from the Main Thread!'
*** First throw call stack:
(
	0   CoreFoundation                      0x00007fff4e696fcb __exceptionPreprocess + 171
	1   libobjc.A.dylib                     0x00007fff75338c76 objc_exception_throw + 48
	2   AppKit                              0x00007fff4c389faf -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 4167
	3   imj-game-hamazed-exe                0x000000010f979933 _glfwPlatformPollEvents + 147
	4   imj-game-hamazed-exe                0x000000010f96d168 GLFWzmbzm1zi4zi8zi1zmLtjCFy18WuD8hyYAe76SCE_GraphicsziUIziGLFW_pollEvents1_info + 112
)
libc++abi.dylib: terminating with uncaught exception of type NSException
Abort trap: 6

So my question then is: is there a way to run code inside the Process monad while also staying on the main thread?

@facundominguez
Contributor

facundominguez commented Feb 5, 2018 via email

@OlivierSohn
Author

@facundominguez I like the idea of running in an auxiliary thread; I think it will incur less overhead and be more flexible than calling runProcess inside the loop. In the meantime I tried a websocket-based approach, where I had to use auxiliary threads and channels to communicate with them, so once it works it should be easy to refactor to cloud-haskell and compare the two approaches.
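
Something along these lines is what I have in mind (just a sketch; PlayerEvent, pollPlayerEvents and sendToServer are placeholders for the real game code):

import Control.Concurrent (forkIO)
import Control.Concurrent.STM
import Control.Distributed.Process (Process, liftIO)
import Control.Distributed.Process.Node (LocalNode, runProcess)
import Control.Monad (forever, void)

data PlayerEvent = Fire | Accelerate Double

gameMain :: LocalNode -> IO ()
gameMain node = do
  events <- newTQueueIO :: IO (TQueue PlayerEvent)
  -- Cloud Haskell lives in an auxiliary thread; it only ever touches
  -- the STM queue, never GLFW.
  void $ forkIO $ runProcess node (networkLoop events)
  -- The main thread keeps ownership of GLFW: glfwPollEvents stays here.
  forever $ do
    evs <- pollPlayerEvents
    mapM_ (atomically . writeTQueue events) evs

networkLoop :: TQueue PlayerEvent -> Process ()
networkLoop events = forever $ do
  ev <- liftIO $ atomically $ readTQueue events
  sendToServer ev

-- Placeholder stubs standing in for the real game / networking code:
pollPlayerEvents :: IO [PlayerEvent]
pollPlayerEvents = return []      -- would call GLFW's pollEvents on the main thread

sendToServer :: PlayerEvent -> Process ()
sendToServer _ = return ()        -- would e.g. sendChan the event to the server node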

I'm not familiar with asynchronous exceptions, but from what you explain, making it run on the calling thread does sound tricky, almost as if it would go against a fundamental underlying design concept. I wonder if there is a deeper reason why runProcess doesn't run on the calling thread (is it easier to implement supervision this way?), and whether such design decisions are publicly documented anywhere?

Thanks for your support!
Olivier

@facundominguez
Contributor

facundominguez commented Feb 5, 2018 via email

@hyperthunk
Member

There's a /huge/ thread about this... Let me see if I can dig it up over the weekend and post back.

@OlivierSohn
Author

@hyperthunk Thanks, I'd be interested in reading this thread, if you find it!

@hyperthunk
Member

@OlivierSohn sorry this conversation went dead! I had some personal issues that kept me away from the project, but it's under active development again now. If you're still using (or interested in using) cloud haskell, then let me know and I'll migrate this issue to an active repository and we can discuss further. For now I am going to close this issue, though it is not resolved.

For the benefit of future readers, the reason runProcess doesn't run in the calling thread is indeed, as @OlivierSohn intimated, that a cloud haskell Process is essentially a managed thread, the lifecycle of which is bound to the internal infrastructure of the hosting cloud haskell node. Running a CH Process in the calling thread would amount to converting said thread into an Actor for the duration of the Process code which is executing, and that seems a somewhat odd semantic to follow from the caller's point of view.

Furthermore, I suspect the issue @OlivierSohn ran into is related to the fact that runProcess synchronises on an MVar in both the calling thread and the spawned forkIO thread, which carries numerous caveats when you're stepping across OS thread boundaries, as glfwPollEvents doubtless does.
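
Roughly speaking, runProcess behaves like this (a simplified sketch, not the actual library source, ignoring exception propagation):

import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
import Control.Distributed.Process (Process, liftIO)
import Control.Distributed.Process.Node (LocalNode, forkProcess)
import Control.Monad (void)

runProcessSketch :: LocalNode -> Process () -> IO ()
runProcessSketch node proc = do
  done <- newEmptyMVar
  -- the Process runs on a thread forked and managed by the node...
  void $ forkProcess node $ do
    proc
    liftIO $ putMVar done ()
  -- ...while the calling thread just blocks until it finishes, so the
  -- Process code never executes on the caller's (e.g. main/GLFW) thread.
  takeMVar done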

Synchronising cloud haskell code with websocket/webserver code in the way @OlivierSohn described above can be achieved using the distributed-process-client-server library, and is discussed at length in this ticket.

For a detailed breakdown of how we're going to be addressing usability and stability issues around using Cloud Haskell in the next major release, see the discussion around separating actors and typed channels here, and the bigger-picture discussion here.

To summarise the important bits from the ticket I mentioned above, here is some code snipped from that discussion:

data StmServer = StmServer { serverPid  :: ProcessId
                           , writerChan :: TQueue String
                           , readerChan :: TQueue String
                           }

We start out by defining a server handle, which is good practice for -client-server apps as per the docs/tutorials. We will use this to interact with the server process. Since we want to resolve it to a process (for monitoring) and in our test case, to kill it (once we're done), we add the relevant instances from distributed-process-extras to make that easy:

instance Resolvable StmServer where
  resolve = return . Just . serverPid

instance Killable StmServer where
  killProc StmServer{..} = kill serverPid
  exitProc StmServer{..} = exit serverPid

The client part of the interaction uses a new function exposed through the Client module, callSTM, which takes an arbitrary STM action for writing and another one for reading, and then executes them both, whilst monitoring the server to ensure we get a failure message if it crashes (so we don't block indefinitely). Currently we can't read and then write atomically (and I'm not sure we want to here), but there is a way to expose that if we want to.

The callSTM implementation is very simple, and relies on awaitResponse from -extras, which does the relevant monitoring for us...

callSTM :: forall s a b . (Addressable s)
         => s
         -> (a -> STM ())
         -> STM b
         -> a
         -> Process (Either ExitReason b)
callSTM server writeAction readAction input = do
  liftIO $ atomically $ writeAction input
  awaitResponse server [ matchSTM readAction (return . Right) ]

Back to our code then, we implement the client side of our API using this function, and use the handle to (a) ensure we have the relevant STM data available to us, and (b) ensure nobody accidentally passes an invalid ProcessId or some such:

echoStm :: StmServer -> String -> Process (Either ExitReason String)
echoStm StmServer{..} = callSTM serverPid
                                (writeTQueue writerChan)
                                (readTQueue  readerChan)

Now for our server implementation. We create the STM actions which, as you can see from the client code, involve two TQueues: one for writing requests and a second for replies. You could easily replace these with TChan or TMVar if you wished, though I'd be cautious about using blocking cells if I were you. Anyway, the client and server APIs simply deal with STM a and don't regulate this at all.

Given our input and output channels, we wire them into the server using the new handleCallExternal API, which works very much like handleCall except that it takes two STM actions, one for reading and another for writing back the replies. Since these are expressed as STM a (roughly speaking), you can do whatever you like just as with the client portion of the code. This is where wrapping up your server capability into an isolated module and exposing it only via a handle becomes important. Later on, when we start looking at Task and other APIs, we will build on (and build new) capabilities that abstract this kind of detail away from the application developer.

Here's our server code now:

launchEchoServer :: Process StmServer
launchEchoServer = do
  (inQ, replyQ) <- liftIO $ do
    cIn <- newTQueueIO
    cOut <- newTQueueIO
    return (cIn, cOut)

  let procDef = statelessProcess {
                  apiHandlers = [
                    handleCallExternal
                      (readTQueue inQ)
                      (writeTQueue replyQ)
                      (\st (msg :: String) -> reply msg st)
                  ]
                }

  pid <- spawnLocal $ serve () (statelessInit Infinity) procDef
  return $ StmServer pid inQ replyQ

Those STM implementation details don't escape the lexical scope of the launchEchoServer function, which I feel is important here, to minimise leaking information that API consumers shouldn't have to care about.

Finally, the test case, which simply launches the server, calls it synchronously, and puts the reply/response into our result:

testExternalCall :: TestResult Bool -> Process ()
testExternalCall result = do
  let txt = "hello stm-call foo"
  srv <- launchEchoServer
  echoStm srv txt >>= stash result . (== Right txt)
  killProc srv "done"

So, there you have it.... I think callSTM is a neat way to encapsulate synchronous communication between non-CH clients and CH servers. For non-synchronised, and generally more nuanced cases, handleExternal should be general enough to support almost all other use-cases. And you can, of course, write code that allows the STM actions to be accessed and used arbitrarily from IO or anywhere else you can use STM.
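
For example, a plain IO thread (your websocket handler, say) could talk to the same echo server without ever entering the Process monad, because the handle just exposes ordinary STM queues. A sketch of that last point (note there's no monitoring here, so unlike callSTM this would block forever if the server died):

{-# LANGUAGE RecordWildCards #-}
import Control.Concurrent.STM (atomically, readTQueue, writeTQueue)

echoFromIO :: StmServer -> String -> IO String
echoFromIO StmServer{..} msg = do
  atomically $ writeTQueue writerChan msg   -- request, picked up by handleCallExternal
  atomically $ readTQueue  readerChan       -- reply, written back by the server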
