
Commit 647046e

committed: Organize the English original of Chapter 6 into Markdown

1 parent 6362503

20 files changed: +1118 −0 lines

06_ItemReaders_ItemWriters/613.md

# 6.13 Creating Custom ItemReaders and ItemWriters #

So far in this chapter, the basic contracts for reading and writing in Spring Batch, along with some common implementations, have been discussed. However, these are all fairly generic, and there are many potential scenarios that may not be covered by the out-of-the-box implementations. This section shows, using a simple example, how to create custom *ItemReader* and *ItemWriter* implementations and how to implement their contracts correctly. The *ItemReader* also implements *ItemStream*, in order to illustrate how to make a reader or writer restartable.

06_ItemReaders_ItemWriters/613_1.md

## 6.13.1 Custom ItemReader Example ##

For the purpose of this example, a simple *ItemReader* implementation that reads from a provided list will be created. We'll start by implementing the most basic contract of *ItemReader*, read:
```java
public class CustomItemReader<T> implements ItemReader<T> {

    List<T> items;

    public CustomItemReader(List<T> items) {
        this.items = items;
    }

    public T read() throws Exception, UnexpectedInputException,
            NoWorkFoundException, ParseException {

        if (!items.isEmpty()) {
            return items.remove(0);
        }
        return null;
    }
}
```
This very simple class takes a list of items and returns them one at a time, removing each from the list as it is read. When the list is empty, it returns null, satisfying the most basic requirements of an *ItemReader*, as illustrated below:
```java
List<String> items = new ArrayList<String>();
items.add("1");
items.add("2");
items.add("3");

ItemReader<String> itemReader = new CustomItemReader<String>(items);
assertEquals("1", itemReader.read());
assertEquals("2", itemReader.read());
assertEquals("3", itemReader.read());
assertNull(itemReader.read());
```
**Making the *ItemReader* Restartable**

The final challenge is to make the *ItemReader* restartable. Currently, if the power goes out and processing begins again, the *ItemReader* must start at the beginning. This is actually valid in many scenarios, but it is sometimes preferable that a batch job restarts where it left off. The key discriminant is often whether the reader is stateful or stateless. A stateless reader does not need to worry about restartability, but a stateful one has to try to reconstitute its last known state on restart. For this reason, we recommend that you keep custom readers stateless if possible, so that you need not worry about restartability.
If you do need to store state, the *ItemStream* interface should be used:
```java
public class CustomItemReader<T> implements ItemReader<T>, ItemStream {

    List<T> items;
    int currentIndex = 0;
    private static final String CURRENT_INDEX = "current.index";

    public CustomItemReader(List<T> items) {
        this.items = items;
    }

    public T read() throws Exception, UnexpectedInputException,
            ParseException {

        if (currentIndex < items.size()) {
            return items.get(currentIndex++);
        }

        return null;
    }

    public void open(ExecutionContext executionContext) throws ItemStreamException {
        if (executionContext.containsKey(CURRENT_INDEX)) {
            currentIndex = (int) executionContext.getLong(CURRENT_INDEX);
        }
        else {
            currentIndex = 0;
        }
    }

    public void update(ExecutionContext executionContext) throws ItemStreamException {
        executionContext.putLong(CURRENT_INDEX, currentIndex);
    }

    public void close() throws ItemStreamException {}
}
```
On each call to the *ItemStream* update method, the current index of the *ItemReader* is stored in the provided *ExecutionContext* under the key 'current.index'. When the *ItemStream* open method is called, the *ExecutionContext* is checked for an entry with that key. If the key is found, the current index is moved to that location. This is a fairly trivial example, but it still meets the general contract:
```java
ExecutionContext executionContext = new ExecutionContext();
((ItemStream) itemReader).open(executionContext);
assertEquals("1", itemReader.read());
((ItemStream) itemReader).update(executionContext);

List<String> items = new ArrayList<String>();
items.add("1");
items.add("2");
items.add("3");
itemReader = new CustomItemReader<String>(items);

((ItemStream) itemReader).open(executionContext);
assertEquals("2", itemReader.read());
```
Most *ItemReaders* have much more sophisticated restart logic. The *JdbcCursorItemReader*, for example, stores the row id of the last processed row in the cursor.

It is also worth noting that the key used within the *ExecutionContext* should not be trivial, because the same *ExecutionContext* is shared by all *ItemStreams* within a *Step*. In most cases, simply prefixing the key with the class name is enough to guarantee uniqueness. However, in the rare case where two instances of the same type of *ItemStream* are used in the same step (which can happen if two files are needed for output), a more unique name is required. For this reason, many of the Spring Batch *ItemReader* and *ItemWriter* implementations have a *setName()* property that allows this key name to be overridden.
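The key-prefixing idea can be sketched in plain Java. The classes below are simplified stand-ins for illustration only (not the real Spring Batch *ExecutionContext* or *ItemStreamSupport*); the names `SimpleExecutionContext` and `NamespacedReaderState` are invented for this sketch:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for Spring Batch's ExecutionContext (illustration only).
class SimpleExecutionContext {
    private final Map<String, Object> map = new HashMap<>();
    public void putLong(String key, long value) { map.put(key, value); }
    public long getLong(String key) { return (Long) map.get(key); }
    public boolean containsKey(String key) { return map.containsKey(key); }
}

// Reader state keyed under a configurable name, so two instances of the
// same class can share one ExecutionContext without colliding.
class NamespacedReaderState {
    private static final String CURRENT_INDEX = "current.index";
    private String name = NamespacedReaderState.class.getSimpleName();

    public void setName(String name) { this.name = name; }

    private String key() { return name + "." + CURRENT_INDEX; }

    // Called on update(): persist this reader's index under its own key.
    public void update(SimpleExecutionContext ctx, int currentIndex) {
        ctx.putLong(key(), currentIndex);
    }

    // Called on open(): reconstitute the index, or start from 0.
    public int open(SimpleExecutionContext ctx) {
        return ctx.containsKey(key()) ? (int) ctx.getLong(key()) : 0;
    }
}
```

With distinct names set, two readers of the same type store their indices under `readerA.current.index` and `readerB.current.index` and restart independently.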

06_ItemReaders_ItemWriters/613_2.md

## 6.13.2 Custom ItemWriter Example ##

Implementing a custom *ItemWriter* is similar in many ways to the *ItemReader* example above, but differs in enough ways to warrant its own example. However, adding restartability is essentially the same, so it won't be covered here. As with the *ItemReader* example, a *List* will be used to keep the example as simple as possible:
```java
public class CustomItemWriter<T> implements ItemWriter<T> {

    List<T> output = TransactionAwareProxyFactory.createTransactionalList();

    public void write(List<? extends T> items) throws Exception {
        output.addAll(items);
    }

    public List<T> getOutput() {
        return output;
    }
}
```
**Making the *ItemWriter* Restartable**

To make the *ItemWriter* restartable, we would follow the same process as for the *ItemReader*, adding and implementing the *ItemStream* interface to synchronize the execution context. In the example we might have to count the number of items processed and add that as a footer record. If we needed to do that, we could implement *ItemStream* in our *ItemWriter* so that the counter is reconstituted from the execution context when the stream is re-opened.

In many realistic cases, custom *ItemWriters* also delegate to another writer that itself is restartable (e.g. when writing to a file), or else they write to a transactional resource and so don't need to be restartable because they are stateless. When you have a stateful writer, you should be sure to implement *ItemStream* as well as *ItemWriter*. Remember also that the client of the writer needs to be aware of the *ItemStream*, so you may need to register it as a stream in the configuration XML.
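The footer-counter idea described above might look like the following. This is a self-contained sketch using simplified local stand-ins (the `Ctx` class and `CountingItemWriter` name are invented for illustration; they are not the real Spring Batch *ExecutionContext* or *ItemStream* types):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified stand-in for Spring Batch's ExecutionContext (illustration only).
class Ctx {
    private final Map<String, Long> map = new HashMap<>();
    void putLong(String k, long v) { map.put(k, v); }
    long getLong(String k) { return map.getOrDefault(k, 0L); }
    boolean containsKey(String k) { return map.containsKey(k); }
}

// A writer that counts items so a footer record could be written later.
// The count survives a restart because open() reconstitutes it.
class CountingItemWriter<T> {
    private static final String COUNT_KEY = "CountingItemWriter.item.count";
    private final List<T> output = new ArrayList<>();
    private long count = 0;

    // open(): reconstitute the counter from a previous execution, if any.
    public void open(Ctx ctx) {
        count = ctx.containsKey(COUNT_KEY) ? ctx.getLong(COUNT_KEY) : 0;
    }

    // update(): persist the counter; called at each commit point.
    public void update(Ctx ctx) {
        ctx.putLong(COUNT_KEY, count);
    }

    public void write(List<? extends T> items) {
        output.addAll(items);
        count += items.size();
    }

    public long getCount() { return count; }
}
```

After a simulated restart (a new writer instance opened with the old context), the counter picks up where the previous execution left off.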

06_ItemReaders_ItemWriters/66.md

# 6.6 Flat Files #

One of the most common mechanisms for interchanging bulk data has always been the flat file. Unlike XML, which has an agreed-upon standard for defining its structure (XSD), anyone reading a flat file must understand ahead of time exactly how the file is structured. In general, all flat files fall into two types: delimited and fixed length. Delimited files are those in which fields are separated by a delimiter, such as a comma. Fixed-length files have fields that are each a set length.
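The two layouts can be contrasted with the same hypothetical three-field record (name, quantity, price). The column widths in the fixed-length variant (10, 3, and 6 characters) are assumptions chosen for this illustration, not a standard:

```java
// Two layouts of the same hypothetical record (name, quantity, price):
//   delimited:    "widget,5,19.99"
//   fixed length: "widget    005 19.99"  (widths 10, 3, 6 -- assumed here)
class FlatFileLayouts {

    // Delimited: fields are separated by a delimiter character.
    static String[] parseDelimited(String line) {
        return line.split(",");
    }

    // Fixed length: fields occupy known column ranges, so the reader
    // must know the widths ahead of time.
    static String[] parseFixedLength(String line) {
        return new String[] {
            line.substring(0, 10).trim(),   // name:     columns 0-9
            line.substring(10, 13).trim(),  // quantity: columns 10-12
            line.substring(13, 19).trim()   // price:    columns 13-18
        };
    }
}
```

The delimited form is self-describing only to the extent that the delimiter is known; the fixed-length form carries no separators at all, which is why the column layout must be agreed in advance.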

06_ItemReaders_ItemWriters/66_1.md

## 6.6.1 The FieldSet ##

When working with flat files in Spring Batch, regardless of whether it is for input or output, one of the most important classes is the *FieldSet*. Many architectures and libraries contain abstractions for helping you read from a file, but they usually return a String or an array of Strings. This really only gets you halfway there. A *FieldSet* is Spring Batch's abstraction for enabling the binding of fields from a file resource. It allows developers to work with file input in much the same way as they would work with database input. A *FieldSet* is conceptually very similar to a JDBC *ResultSet*. A *FieldSet* requires only one argument: a *String* array of tokens. Optionally, you can also configure the names of the fields, so that the fields may be accessed either by index or by name, as patterned after *ResultSet*:
```java
String[] tokens = new String[]{"foo", "1", "true"};
FieldSet fs = new DefaultFieldSet(tokens);
String name = fs.readString(0);
int value = fs.readInt(1);
boolean booleanValue = fs.readBoolean(2);
```
There are many more options on the *FieldSet* interface, such as *Date*, long, *BigDecimal*, etc. The biggest advantage of the *FieldSet* is that it provides consistent parsing of flat-file input. Rather than each batch job parsing differently, in potentially unexpected ways, parsing can be consistent, both when handling errors caused by a format exception and when doing simple data conversions.
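The name-based access mentioned above can be sketched with a minimal do-it-yourself field set. This `MiniFieldSet` class is an invented illustration patterned after the *FieldSet* idea, not the real Spring Batch *DefaultFieldSet*:

```java
import java.util.Arrays;
import java.util.List;

// Minimal illustration of index- and name-based field access,
// patterned after Spring Batch's FieldSet but NOT the real class.
class MiniFieldSet {
    private final String[] tokens;
    private final List<String> names;

    MiniFieldSet(String[] tokens, String[] names) {
        this.tokens = tokens;
        this.names = Arrays.asList(names);
    }

    // Index-based access, like ResultSet's column indices.
    String readString(int index) { return tokens[index]; }
    int readInt(int index) { return Integer.parseInt(tokens[index]); }
    boolean readBoolean(int index) { return Boolean.parseBoolean(tokens[index]); }

    // Name-based access resolves the name to an index first.
    String readString(String name) { return readString(names.indexOf(name)); }
    int readInt(String name) { return readInt(names.indexOf(name)); }
}
```

The point of the pattern is that the parsing and conversion logic lives in one place, so every job reads `"1"` as the int 1 and `"true"` as a boolean in exactly the same way.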
