
Commit 558eb57

grammer fix module book
1 parent 99e9ee4 commit 558eb57

11 files changed (+82 -96 lines changed)

books/accelerated c++.md

Lines changed: 2 additions & 2 deletions
@@ -1,6 +1,6 @@
 # Accelerated C++
 
-- An expression asks the implementation to compute something. The computation yields a result, and may also have side effects-that is, it may affect the state of the program or the implementation in ways that are not directly part of
+- An expression asks the implementation to compute something. The computation yields a result, and may also have side effects that is, it may affect the state of the program or the implementation in ways that are not directly part of
 the result.
 
-- For example, 3+4 is an expression that yields 7 as its result, and has no side effects, and `std::cout << "Hello, world!" << std::endl` is an expression that, as its side effect, writes Hello, world! on the standard output stream and ends the current line.
+- For example, 3+4 is an expression that yields 7 as its result, and has no side effects, and `std::cout << "Hello, world!" << std::endl` is an expression that, as its side effect, writes Hello, world! On the standard output stream and ends the current line.
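As an aside on the hunk above, a minimal C++ sketch (not part of the commit) of the distinction the note draws between an expression's result and its side effect:

```cpp
#include <iostream>

int main() {
    // The expression 3 + 4 yields 7 as its result and has no side effects.
    int sum = 3 + 4;

    // An expression evaluated for its side effect: it writes "Hello, world!"
    // to the standard output stream and ends the current line.
    std::cout << "Hello, world!" << std::endl;

    return sum == 7 ? 0 : 1;
}
```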

books/concurrennt programming in erlang.md

Lines changed: 6 additions & 6 deletions
@@ -1,21 +1,21 @@
 # [Concurrent Programming in Erlang](https://erlang.org/download/erlang-book-part1.pdf)
 
-- Erlang has a process-based model of concurrency with asynchron- ous message passing.
-- The concurrency mechanisms in Erlang are light- weight, i.e. processes require little memory, and creating and deleting processes and message passing require little computational effort.
+- Erlang has a process-based model of concurrency with asynchronous message passing.
+- The concurrency mechanisms in Erlang are lightweight, i.e. processes require little memory, and creating and deleting processes and message passing require little computational effort.
 
 - Erlang is a symbolic programming language with a real-time garbage collector
 
-- The use of a pattern matching syntax, and the ‘single assignment’ property of Erlang variables, leads to clear, short and reliable programs.
+- The use of a pattern-matching syntax, and the ‘single assignment’ property of Erlang variables, leads to clear, short, and reliable programs.
 
 - Registered process which allows us to associate a name with a process.
 
-- Erlang has primitives for multi- processing: spawn starts a parallel computation (called a process); send sends a message to a process; and receive receives a message from a process.
+- Erlang has primitives for multi-processing: spawn starts a parallel computation (called a process); send sends a message to a process; and receive receives a message from a process.
 
-- The syntax Pid ! Msg is used to send a message.
+- The syntax Pid! Msg is used to send a message.
 
 - While we can think of send as sending a message and receive as receiving a message, a more accurate description would be to say that send sends a message to the mailbox of a process and that receive tries to remove a message from the mailbox of the current process.
 - Receive is selective, that is to say, it takes the first message which matches one of the message patterns from a queue of messages waiting for the attention of the receiving process.
-- If none of the receive patterns matches then the process is suspended until the next message is received unmatched messages are saved for later processing.
+- If none of the received patterns matches then the process is suspended until the next message is received unmatched messages are saved for later processing.
 
 - Instead of evaluating the function, however, and returning the result as in apply, spawn/3 creates a new concurrent process to evaluate the function and returns the Pid (process identifier) of the newly created process.
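To make the mailbox description in this hunk concrete, here is a small sketch in C++ rather than Erlang (an analogy only, not part of the commit): send appends a message to a process's mailbox, and a selective receive removes the first queued message that matches a pattern, leaving unmatched messages saved for later.

```cpp
#include <condition_variable>
#include <deque>
#include <functional>
#include <mutex>
#include <string>

// A mailbox in the spirit of an Erlang process mailbox: senders append
// messages, and the receiver removes the first queued message that matches
// a predicate, blocking until one arrives; unmatched messages stay queued.
class Mailbox {
public:
    void send(std::string msg) {  // rough analogue of Pid ! Msg
        std::lock_guard<std::mutex> lock(mutex_);
        queue_.push_back(std::move(msg));
        ready_.notify_one();
    }

    std::string receive(const std::function<bool(const std::string&)>& matches) {
        std::unique_lock<std::mutex> lock(mutex_);
        for (;;) {
            for (auto it = queue_.begin(); it != queue_.end(); ++it) {
                if (matches(*it)) {              // selective receive
                    std::string msg = std::move(*it);
                    queue_.erase(it);            // unmatched messages remain queued
                    return msg;
                }
            }
            ready_.wait(lock);                   // suspend until the next message
        }
    }

private:
    std::mutex mutex_;
    std::condition_variable ready_;
    std::deque<std::string> queue_;
};
```

A receiver thread could call `mailbox.receive([](const std::string& m) { return m == "ping"; })` and would skip over any other queued messages until a `ping` arrives.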

books/design pattern - elements of reusable object oriented software.md

Lines changed: 3 additions & 3 deletions
@@ -1,4 +1,4 @@
-# Design Pattern - Elements of reusable object oriented Software
+# Design Pattern - Elements of reusable object-oriented Software
 
 - Designing object-oriented software is hard, and designing reusable object-oriented software is even harder.
 
@@ -13,9 +13,9 @@
 - The Structural class patterns use inheritance to compose classes, while the Structural object patterns describe ways to assemble objects
 - The Behavioral class patterns use inheritance to describe algorithms and flow of control, whereas the Behavioral object patterns describe how a group of objects cooperate to perform a task that no single object can carry out alone
 
-- Composite is often used with Iterator or Visitor.
+- Composite is often used with an Iterator or Visitor.
 - Some patterns are alternatives: Prototype is often an alternative to Abstract Factory.
 - Some patterns result in similar designs even though the patterns have different intents. For example, the structure diagrams of Composite and Decorator are similar.
 
 ![](./screen/Design%20Pattern.png)
-![](./screen/Design%20Pattern%20Relations.png)
+![](./screen/Design%20Pattern%20Relations.png)

books/distributed systems concept and design.md

Lines changed: 9 additions & 9 deletions
@@ -7,12 +7,12 @@
 - lack of a global clock
 - independent failures of components
 
-- The largest online game, EVE Online, utilises a client-server architecture where a single copy of the state of the world is maintained on a centralized server and accessed by client programs running on players’ consoles or other devices.
+- The largest online game, EVE Online, utilizes a client-server architecture where a single copy of the state of the world is maintained on a centralized server and accessed by client programs running on players’ consoles or other devices.
 
 ### System Models
 
 - Computer clocks and timing events
-- Each computer in a distributed system has its own internal clock, which can be used by local processes to obtain the value of the current time.
+- Each computer in a distributed system has its internal clock, which can be used by local processes to obtain the value of the current time.
 - Therefore two processes running on different computers can each associate timestamps with their events.
 - However, even if the two processes read their clocks at the same time, their local clocks may supply different time values.
 - This is because computer clocks drift from perfect time and, more importantly, their drift rates differ from one another.
@@ -27,15 +27,15 @@
 - Chapter 5 presents the request-reply protocol, which supports RMI.
 - Its failure characteristics depend on the failure characteristics of both processes and communication channels.
 - The protocol can be built from either datagram or stream communication.
-- The choice may be decided according to a consideration of simplicity of implementation, performance and reliability.
+- The choice may be decided according to a consideration of simplicity of implementation, performance, and reliability.
 
 - Chapter 17 presents the `two-phase commit` protocol for transactions.
 - It is designed to complete in the face of well-defined failures of processes and communication channels.
 
 - The algorithm that we describe here is a `distance vector` algorithm.
 - This will provide a basis for the discussion in Section 3.4.3 of the link-state algorithm that has been used since 1979 as the main routing algorithm in the Internet.
 
-- Routing in networks is an instance of the problem of path finding in graphs.
+- Routing in networks is an instance of the problem of pathfinding in graphs.
 - Bellman’s shortest path algorithm, published well before computer networks were developed [Bellman 1957], provides the basis for the distance vector method.
 
 - Bellman’s method was converted into a distributed algorithm suitable for implementation in large networks by Ford and Fulkerson [1962], and protocols based on their work are often referred to as ‘Bellman–Ford’ protocols.
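A brief C++ sketch (illustrative only, not from the book or this commit) of the distance-vector update behind the Bellman–Ford protocols mentioned in the hunk above: a node recomputes its distance to each destination as the minimum, over its neighbours, of the link cost to that neighbour plus the distance the neighbour advertises.

```cpp
#include <limits>
#include <map>
#include <string>

using Node = std::string;
using DistanceVector = std::map<Node, double>;  // destination -> estimated cost

// One distance-vector (Bellman-Ford style) recomputation at a single node.
// link_cost: this node's direct cost to each neighbour.
// advertised: the distance vector most recently received from each neighbour.
DistanceVector recompute(const std::map<Node, double>& link_cost,
                         const std::map<Node, DistanceVector>& advertised) {
    const double inf = std::numeric_limits<double>::infinity();
    DistanceVector best;

    auto offer = [&best](const Node& dest, double cost) {
        auto it = best.find(dest);
        if (it == best.end() || cost < it->second) best[dest] = cost;
    };

    for (const auto& [neighbour, cost] : link_cost) {
        offer(neighbour, cost);                    // the direct link itself
        auto adv = advertised.find(neighbour);
        if (adv == advertised.end()) continue;     // no vector received yet
        for (const auto& [dest, dist] : adv->second) {
            if (dist == inf) continue;             // unreachable via this neighbour
            offer(dest, cost + dist);              // route through the neighbour
        }
    }
    return best;
}
```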
@@ -44,10 +44,10 @@
 
 - The remote procedure call (RPC) approach extends the common programming abstraction of the procedure call to distributed environments, allowing a calling process to call a procedure in a remote node as if it is local.
 
-- Remote method invocation (RMI) is similar to RPC but for distributed objects, with added benefits in terms of using object-oriented programming concepts in distributed systems and also extending the concept of an object reference to the global distributed environments, and allowing the use of object references as parameters in remote invocations.
+- Remote method invocation (RMI) is similar to RPC but for distributed objects, with added benefits in terms of using object-oriented programming concepts in distributed systems and also extending the concept of an object reference to the globally distributed environments, and allowing the use of object references as parameters in remote invocations.
 
 - Space uncoupling, in which the sender does not know or need to know the identity of the receiver(s), and vice versa.
-- Because of this space uncoupling, the system developer has many degrees of freedom in dealing with change: participants (senders or receivers) can be replaced, updated, replicated or migrated.
+- Because of this space uncoupling, the system developer has many degrees of freedom in dealing with change: participants (senders or receivers) can be replaced, updated, replicated, or migrated.
 
 - Time uncoupling, in which the sender and receiver(s) can have independent lifetimes. In other words, the sender and receiver(s) do not need to exist at the same time to communicate.
 - This has important benefits, for example, in more volatile environments where senders and receivers may come and go.
@@ -56,7 +56,7 @@
 Process manager: Creation of and operations upon processes.
 
 - A process is a unit of resource management, including an address space and one or more threads.
-- Thread manager: Thread creation, synchronization and scheduling.
+- Thread manager: Thread creation, synchronization, and scheduling.
 
 - Threads are schedulable activities attached to processes and are fully described in Section 7.4.
 
@@ -67,14 +67,14 @@ Process manager: Creation of and operations upon processes.
 
 - Memory manager: Management of physical and virtual memory. Section 7.4 and Section 7.5 describe the utilization of memory management techniques for efficient data copying and sharing.
 
-- Supervisor: Dispatching of interrupts, system call traps and other exceptions; control of memory management unit and hardware caches; processor and floating-point unit register manipulations.
+- Supervisor: Dispatching of interrupts, system call traps, and other exceptions; control of memory management unit and hardware caches; processor and floating-point unit register manipulations.
 - This is known as the Hardware Abstraction Layer in Windows. The reader is referred to Bacon [2002] and Tanenbaum [2007] for a fuller description of the computer-dependent aspects of the kernel.
 
 - When a process executes application code, it executes in a distinct user-level address space for that application; when the same process executes kernel code, it executes in the kernel’s address space.
 
 - The process can safely transfer from a user-level address space to the kernel’s address space via an exception such as an interrupt or a system call trap – the invocation mechanism for resources managed by the kernel.
 
 - A system call trap is implemented by a machine-level TRAP instruction, which puts the processor into supervisor mode and switches to the kernel address space.
-- When the TRAP instruction is executed, as with any type of exception, the hardware forces the processor to execute a kernel-supplied handler function, in order that no process may gain illicit control of the hardware.
+- When the TRAP instruction is executed, as with any type of exception, the hardware forces the processor to execute a kernel-supplied handler function, so that no process may gain illicit control of the hardware.
 
 - Programs pay a price for protection. Switching between address spaces may take many processor cycles, and a system call trap is a more expensive operation than a simple procedure or method call. We shall see in Section 7.5.1 how these penalties factor into invocation costs.

books/enterprise integration pattern - addison wisley.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@
 - Messaging enables data or commands to be sent across the network using a “send and forget” approach where the caller sends the information and then goes on to other work while the information is transmitted by the messaging system. Optionally, the caller can later be notified of the result through a callback.
 - Fundamental challenges:
 - Networks are unreliable. Integration solutions have to transport data from one computer to another across networks. Compared to a process running on a single computer, distributed computing has to be prepared to deal with a much larger set of possible problems.
-- Often times, two systems to be integrated are separated by continents and data between them has to travel through phone-lines, LAN segments, routers, switches, public networks, and satellite links. Each of these steps can cause delays or interruptions.
+- Oftentimes, two systems to be integrated are separated by continents, and data between them has to travel through phone lines, LAN segments, routers, switches, public networks, and satellite links. Each of these steps can cause delays or interruptions.
 - Networks are slow. Sending data across a network is multiple orders of magnitude slower than making a local method call. Designing a widely distributed solution the same way you would approach a single application could have disastrous performance implications.
 - Any two applications are different. Integration solutions need to transmit information between systems that use different programming languages, operating platforms, and data formats.
 - An integration solution needs to be able to interface with all these different technologies.
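For illustration only (not part of the commit), a compact C++ sketch of the “send and forget” approach described at the top of this hunk: the caller hands the message to the messaging layer, continues with other work, and is optionally notified of the result later through a callback. The transport here is a stand-in.

```cpp
#include <functional>
#include <future>
#include <iostream>
#include <string>

// "Send and forget": the caller hands the message to the messaging layer and
// immediately continues; the layer delivers it asynchronously and, optionally,
// invokes a callback with the outcome.
std::future<void> send(std::string message,
                       std::function<void(bool delivered)> on_result = {}) {
    return std::async(std::launch::async, [msg = std::move(message), on_result] {
        // Stand-in for the real transport (network I/O would happen here).
        std::cout << "delivering: " << msg << '\n';
        if (on_result) on_result(true);   // notify the caller of the result
    });
}

int main() {
    auto pending = send("order #42 placed",
                        [](bool ok) { std::cout << (ok ? "acked\n" : "failed\n"); });
    std::cout << "caller keeps working while the message is in transit\n";
    pending.wait();  // only so the demo does not exit before delivery
}
```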
