- An expression asks the implementation to compute something. The computation yields a result, and may also have side effects; that is, it may affect the state of the program or the implementation in ways that are not directly part of the result.

- For example, 3+4 is an expression that yields 7 as its result and has no side effects, and `std::cout << "Hello, world!" << std::endl` is an expression that, as its side effect, writes Hello, world! on the standard output stream and ends the current line.
# [Concurrent Programming in Erlang](https://erlang.org/download/erlang-book-part1.pdf)
- Erlang has a process-based model of concurrency with asynchronous message passing.
- The concurrency mechanisms in Erlang are lightweight, i.e. processes require little memory, and creating and deleting processes and message passing require little computational effort.

- Erlang is a symbolic programming language with a real-time garbage collector.

- The use of a pattern-matching syntax, and the ‘single assignment’ property of Erlang variables, leads to clear, short, and reliable programs.

- A registered process allows us to associate a name with a process.

- Erlang has primitives for multi-processing: `spawn` starts a parallel computation (called a process); `send` sends a message to a process; and `receive` receives a message from a process.

- The syntax `Pid ! Msg` is used to send a message.

- While we can think of send as sending a message and receive as receiving a message, a more accurate description would be to say that send sends a message to the mailbox of a process and that receive tries to remove a message from the mailbox of the current process.
- `receive` is selective, that is to say, it takes the first message which matches one of the message patterns from a queue of messages waiting for the attention of the receiving process.
- If none of the receive patterns matches, the process is suspended until the next message is received; unmatched messages are saved for later processing.

- Instead of evaluating the function and returning the result as in `apply`, `spawn/3` creates a new concurrent process to evaluate the function and returns the Pid (process identifier) of the newly created process.
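The mailbox behaviour described above (send never blocks; receive takes the first matching message and leaves the rest for later) can be sketched as a rough Python analogy. This is not Erlang, just an illustration of selective receive over a mailbox; the `Mailbox` class and `pattern` predicate are inventions for the sketch:

```python
from collections import deque

class Mailbox:
    """Rough analogy of an Erlang process mailbox with selective receive."""
    def __init__(self):
        self.queue = deque()  # messages waiting for the process's attention

    def send(self, msg):
        # 'send' just appends to the mailbox; the sender never blocks
        self.queue.append(msg)

    def receive(self, pattern):
        # Take the FIRST message matching the pattern; unmatched
        # messages stay in the mailbox for later processing.
        for i, msg in enumerate(self.queue):
            if pattern(msg):
                del self.queue[i]
                return msg
        return None  # a real process would suspend until a new message arrives

box = Mailbox()
box.send(("log", "starting"))
box.send(("result", 7))
print(box.receive(lambda m: m[0] == "result"))  # ('result', 7)
print(list(box.queue))                          # [('log', 'starting')] saved for later
```

In real Erlang the mailbox belongs to a process, `receive` suspends rather than returning `None`, and matching uses patterns rather than predicate functions.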
# Design Patterns - Elements of Reusable Object-Oriented Software
- Designing object-oriented software is hard, and designing reusable object-oriented software is even harder.

- The Structural class patterns use inheritance to compose classes, while the Structural object patterns describe ways to assemble objects.
- The Behavioral class patterns use inheritance to describe algorithms and flow of control, whereas the Behavioral object patterns describe how a group of objects cooperate to perform a task that no single object can carry out alone.

- Composite is often used with Iterator or Visitor.
- Some patterns are alternatives: Prototype is often an alternative to Abstract Factory.
- Some patterns result in similar designs even though the patterns have different intents. For example, the structure diagrams of Composite and Decorator are similar.
books/distributed systems concept and design.md
- lack of a global clock
- independent failures of components

- The largest online game, EVE Online, utilizes a client-server architecture where a single copy of the state of the world is maintained on a centralized server and accessed by client programs running on players’ consoles or other devices.

### System Models

- Computer clocks and timing events
- Each computer in a distributed system has its own internal clock, which can be used by local processes to obtain the value of the current time.
- Therefore two processes running on different computers can each associate timestamps with their events.
- However, even if the two processes read their clocks at the same time, their local clocks may supply different time values.
- This is because computer clocks drift from perfect time and, more importantly, their drift rates differ from one another.
- Chapter 5 presents the request-reply protocol, which supports RMI.
- Its failure characteristics depend on the failure characteristics of both processes and communication channels.
- The protocol can be built from either datagram or stream communication.
- The choice may be decided according to a consideration of simplicity of implementation, performance, and reliability.

- Chapter 17 presents the `two-phase commit` protocol for transactions.
- It is designed to complete in the face of well-defined failures of processes and communication channels.

- The algorithm that we describe here is a `distance vector` algorithm.
- This will provide a basis for the discussion in Section 3.4.3 of the link-state algorithm that has been used since 1979 as the main routing algorithm in the Internet.

- Routing in networks is an instance of the problem of path finding in graphs.
- Bellman’s shortest path algorithm, published well before computer networks were developed [Bellman 1957], provides the basis for the distance vector method.

- Bellman’s method was converted into a distributed algorithm suitable for implementation in large networks by Ford and Fulkerson [1962], and protocols based on their work are often referred to as ‘Bellman–Ford’ protocols.
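The relaxation step underlying the distance vector method can be sketched in a few lines. This is a centralized Bellman-Ford pass over an assumed toy graph, not the distributed protocol itself (in the distributed form, each node runs the same minimization using only the estimates its neighbours advertise):

```python
# Minimal Bellman-Ford shortest paths: a centralized sketch of the
# relaxation step that distance-vector routing runs in distributed form.
def bellman_ford(edges, nodes, source):
    INF = float("inf")
    dist = {n: INF for n in nodes}
    dist[source] = 0
    for _ in range(len(nodes) - 1):        # at most |V|-1 rounds of relaxation
        for u, v, cost in edges:
            if dist[u] + cost < dist[v]:   # found a cheaper path to v via u
                dist[v] = dist[u] + cost
    return dist

# Hypothetical 4-node network (directed links with costs)
edges = [("A", "B", 1), ("B", "C", 2), ("A", "C", 5), ("C", "D", 1)]
print(bellman_ford(edges, {"A", "B", "C", "D"}, "A"))
# costs from A: A=0, B=1 (direct), C=3 (via B, cheaper than the direct cost 5), D=4
```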
- The remote procedure call (RPC) approach extends the common programming abstraction of the procedure call to distributed environments, allowing a calling process to call a procedure in a remote node as if it were local.

- Remote method invocation (RMI) is similar to RPC but for distributed objects, with added benefits in terms of using object-oriented programming concepts in distributed systems, extending the concept of an object reference to the global distributed environment, and allowing the use of object references as parameters in remote invocations.

- Space uncoupling, in which the sender does not know or need to know the identity of the receiver(s), and vice versa.
- Because of this space uncoupling, the system developer has many degrees of freedom in dealing with change: participants (senders or receivers) can be replaced, updated, replicated, or migrated.

- Time uncoupling, in which the sender and receiver(s) can have independent lifetimes. In other words, the sender and receiver(s) do not need to exist at the same time to communicate.
- This has important benefits, for example, in more volatile environments where senders and receivers may come and go.
- Process manager: Creation of and operations upon processes.

- A process is a unit of resource management, including an address space and one or more threads.
- Thread manager: Thread creation, synchronization, and scheduling.

- Threads are schedulable activities attached to processes and are fully described in Section 7.4.
- Memory manager: Management of physical and virtual memory. Section 7.4 and Section 7.5 describe the utilization of memory management techniques for efficient data copying and sharing.

- Supervisor: Dispatching of interrupts, system call traps, and other exceptions; control of the memory management unit and hardware caches; processor and floating-point unit register manipulations.
- This is known as the Hardware Abstraction Layer in Windows. The reader is referred to Bacon [2002] and Tanenbaum [2007] for a fuller description of the computer-dependent aspects of the kernel.

- When a process executes application code, it executes in a distinct user-level address space for that application; when the same process executes kernel code, it executes in the kernel’s address space.

- The process can safely transfer from a user-level address space to the kernel’s address space via an exception such as an interrupt or a system call trap – the invocation mechanism for resources managed by the kernel.

- A system call trap is implemented by a machine-level TRAP instruction, which puts the processor into supervisor mode and switches to the kernel address space.
- When the TRAP instruction is executed, as with any type of exception, the hardware forces the processor to execute a kernel-supplied handler function, so that no process may gain illicit control of the hardware.

- Programs pay a price for protection. Switching between address spaces may take many processor cycles, and a system call trap is a more expensive operation than a simple procedure or method call. We shall see in Section 7.5.1 how these penalties factor into invocation costs.
books/enterprise integration pattern - addison wisley.md
- Messaging enables data or commands to be sent across the network using a “send and forget” approach where the caller sends the information and then goes on to other work while the information is transmitted by the messaging system. Optionally, the caller can later be notified of the result through a callback.
- Fundamental challenges:
- Networks are unreliable. Integration solutions have to transport data from one computer to another across networks. Compared to a process running on a single computer, distributed computing has to be prepared to deal with a much larger set of possible problems. Oftentimes, two systems to be integrated are separated by continents, and data between them has to travel through phone lines, LAN segments, routers, switches, public networks, and satellite links. Each of these steps can cause delays or interruptions.
- Networks are slow. Sending data across a network is multiple orders of magnitude slower than making a local method call. Designing a widely distributed solution the same way you would approach a single application could have disastrous performance implications.
- Any two applications are different. Integration solutions need to transmit information between systems that use different programming languages, operating platforms, and data formats.
- An integration solution needs to be able to interface with all these different technologies.
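The “send and forget” style with an optional result callback can be sketched as a toy in-process analogy (not a real messaging system; `send`, `run_messaging_system`, and the delivery format are inventions for the sketch): the caller enqueues a message and returns immediately, and the messaging layer transmits later and invokes the callback with the result.

```python
import queue

# Toy in-process analogy of send-and-forget messaging with a result callback.
channel = queue.Queue()

def send(msg, on_result=None):
    channel.put((msg, on_result))   # caller returns immediately ("forget")

def run_messaging_system():
    # Later, the messaging layer transmits queued messages and
    # notifies callers of results via their callbacks.
    while not channel.empty():
        msg, on_result = channel.get()
        result = f"delivered:{msg}"
        if on_result:
            on_result(result)

results = []
send("order-42", results.append)
send("ping")                  # no callback: pure fire-and-forget
run_messaging_system()
print(results)                # ['delivered:order-42']
```

A real messaging system would run the delivery loop concurrently and persist the queue, which is exactly what makes the caller resilient to slow or unreliable networks.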