2,209,477 events, 1,244,577 push events, 1,809,629 commit messages, 99,369,259 characters
im sorry andante im fucking up ur shit oh god oh fuck
you know what fuck you ungoblins your horde (#218)
step3: fix algorithm and data structure.
-
First, simplest: let's use FNV-1 (it is not THE fastest hash, but it is simple, portable, and better distributed): -3% speed-up, fewer probes.
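For reference, a minimal sketch of FNV-1 (64-bit) in Python; the offset basis and prime are the standard FNV constants, everything else is just illustrative:

```python
FNV64_OFFSET = 0xcbf29ce484222325  # standard 64-bit FNV offset basis
FNV64_PRIME = 0x100000001b3        # standard 64-bit FNV prime
MASK64 = 0xFFFFFFFFFFFFFFFF

def fnv1_64(data: bytes) -> int:
    """FNV-1: multiply by the prime, then XOR in each byte (FNV-1a swaps the order)."""
    h = FNV64_OFFSET
    for b in data:
        h = (h * FNV64_PRIME) & MASK64  # wrap to 64 bits
        h ^= b
    return h
```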
-
Second, we need only one invalid hash value - 0. This gives us -31% on miss, -0.5% on hit. That is a very good change, for several reasons: a) it is faster to compare with zero, b) we get better hashes (as we don't waste one bit).
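One way to reserve 0 as the single invalid ("empty slot") hash value - my assumption of how it is handled, not the original code - is to remap a genuine zero hash to a fixed nonzero value:

```python
def stored_hash(key: bytes) -> int:
    h = fnv1_64(key)            # from the sketch above
    return h if h != 0 else 1   # 0 is reserved to mean "empty slot"
```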
-
Third, let's split hashes, keys, and values into separate arrays: -21% on hit and -23% on miss. Keep in mind, we have exactly the same number of probes, which is about 2.5 times higher than required in SC, and yet our OA performance is almost the same (we still lose 12%)!
Why this helps performance is obvious: it is just much more cache friendly to iterate over a smaller array. What is not as obvious is that it is actually a better structure, and it lets us save more memory. Keys, values, and hashes all have different alignment (in a real-life scenario).
The original implementation was wasting 4 bytes per entry on aligning an int to the alignment of a string (which is >=8 on 64-bit platforms), so we wasted memory bandwidth loading 4 bytes of bullshit data AND wasted memory. That alone saves us 128 KB of memory (at ~16384 capacity). We still lose ~2.82x on MISS. One can argue that keys should be in the same array as values (since when we finally get a hit, we usually return the value's data, which we don't do here). Well, I didn't make that test and data set, and the whole point of this analysis is that in any real scenario OA is (probably) a better solution. If we needed to return a value different from the key and we got many hits, then fine (and in this particular test it wouldn't matter much).
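A minimal structure-of-arrays sketch of the layout described above (open addressing, linear probing, hash 0 meaning "empty", reusing stored_hash from the previous snippet); the names, the probing scheme, and the lack of resizing are my assumptions, not the original implementation:

```python
class OAHashTable:
    """Open addressing with parallel arrays for hashes, keys, and values.

    Probing scans only the compact `hashes` array, which is what makes the
    scan cache friendly; keys/values are only touched on a hash match.
    """

    def __init__(self, capacity: int = 16384):
        self.capacity = capacity          # assumed power of two, for cheap masking
        self.hashes = [0] * capacity      # 0 == empty slot
        self.keys = [None] * capacity
        self.values = [None] * capacity

    def _probe(self, key: bytes):
        h = stored_hash(key)              # never 0
        i = h & (self.capacity - 1)
        while True:
            if self.hashes[i] == 0:
                return i, h, False        # reached an empty slot: miss
            if self.hashes[i] == h and self.keys[i] == key:
                return i, h, True         # hit
            i = (i + 1) & (self.capacity - 1)   # linear probing

    def get(self, key: bytes):
        i, _, found = self._probe(key)
        return self.values[i] if found else None

    def put(self, key: bytes, value) -> None:
        # No resizing here: the sketch assumes the load factor stays below 1.
        i, h, found = self._probe(key)
        self.hashes[i], self.keys[i], self.values[i] = h, key, value
```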
-
Fourth, Load Factor IS NOT an actual production/platform requirement!
It is just part of a specific algorithm. The actual requirement (which it represents) is memory usage. So, if we want to compare apples to apples, we should compare the performance of the hash tables at the same MEMORY footprint, not at the same LF (which, again, doesn't matter). With that change we would lose 4% on HIT and 4% on MISS.
step4: Cache efficiency
In the last step we almost got on par with SC at ~the same memory usage (and LF). But we are still losing. Or are we...?
That test was extremely beneficial to SC, simply because of the way we run it. The whole data set completely fits in the L2 cache (several times over, probably), even with the strings' contents, and we do not do anything else, so it sits in L2! Most likely even the L1 cache isn't useless, as the 'buckets' array fits there. Meanwhile, it is a well-known fact that in real-life scenarios the cache will be cold :) You will usually be doing something else besides hammering your HT, and in other threads as well, so even a small HT will usually be 'cold'. So, let's prune the cache (L2) between operations. The test becomes painfully slow, but even at the same LF we already win over SC on both HIT (by 6%) and MISS (by 27%)! And at the same memory usage the win is 26% on HIT and 37.5% on MISS.
Even without completely pruning our cache, just by increasing the working set/HT size to 256k (and so memory usage to 16 MB+, which still keeps the L2 cache somewhat helpful), we get, at the same memory usage, a 2% win on HIT and a 36% win on MISS.
The funny fact: now MISS is actually where we win more (which is absolutely as it should be, since the biggest win of OA is cache/prefetch efficiency on the scan).
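A rough sketch of the cache-pruning step described in step 4 - walking a buffer larger than L2 between measured operations so the table is cold again. The 8 MB size and the 64-byte line step are assumptions about a typical L2, and in Python the timings are far noisier than in the original native benchmark:

```python
EVICTION_BUFFER = bytearray(8 * 1024 * 1024)   # assumed to be larger than L2

def prune_l2_cache() -> int:
    # Touch one byte per (assumed) 64-byte cache line to evict the table's lines.
    total = 0
    for i in range(0, len(EVICTION_BUFFER), 64):
        total += EVICTION_BUFFER[i]
    return total

def cold_get(table: "OAHashTable", key: bytes):
    prune_l2_cache()        # make the table "cold" before the measured lookup
    return table.get(key)
```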
FUCK YOU JS I AM LITERALLY TRANSCENDED PAST YOUR LEVEL
[modular][ready] formal wear for medical and engineering in loadout, gas masks unadded, but coded (#5156)
-
for i just threw out the love of my dreams
-
Update uniform.dm
-
fuck
-
biker
-
Update glasses.dm
-
I don't want any of my sprites in Skyrat. I appreciate you asking though.
-
Revert "I don't want any of my sprites in Skyrat. I appreciate you asking though."
This reverts commit d980baf7ada95a04f74870103b2ee09ede67dcda.
- Revert "Revert "I don't want any of my sprites in Skyrat. I appreciate you asking though.""
This reverts commit ad42656117f90747c0aaf7ade3614a16d67fed4b.
-
as requested
-
casual cargo gear
Co-authored-by: louiseedwardstuart
deleted everything because im fucking stupid and an idiot! DONT HIRE ME!
catching all 7 dirty words
Modified word_list.py to catch all of George Carlin's 7 dirty words you can't say on television
- the words being shit, piss, fuck, cunt, cocksucker, motherfucker and tits.
Add files via upload
Iris Neural Network
Introduction
Artificial Neural Networks are quickly becoming one of the most popular and widely used mechanisms in Machine Learning and Data Analysis. In the last number of years, numerous libraries and software packages have been developed to equip programmers with a set of tools for modeling and analysing data in order to recognise patterns and make predictions using large data sets. In today's age of Big Data it is important to try to make sense of all the data we have in society. This could range from pattern recognition in social media to finance and economic trends. The reality is that today we have more data in existence than ever before, and it is growing at a vast and exponential rate.
Artificial Neural Networks aim to mimic and replicate the neurons of a human brain and, using the power of complex mathematical functions, allow us to process and model data in such a way that we can form rational assumptions about a given data set.
Given the sheer amount of data out there, it is important to note that the data we analyse is often subject to human error and may not always hold a valid essence of truth. For the purpose of this example we will take a look at the Iris Data Set.
More information on the data set can be found on the link provided above or on the front page README of this repository.
Throughout the notebook we aim to build an Artificial Neural Network capable of predicting the species of Iris flowers using Keras - a high-level neural networks API, written in Python and capable of running on top of TensorFlow.
So without further ado, let's get started!
Inputs and Outputs
Data Investigation and Classification
Before trying to create a model for our Neural Network, we first need to investigate our data and determine what will be the inputs and what will be the outputs. The CSV file provided contains 5 columns:
Sepal Length, Sepal Width, Petal Length, Petal Width, Species
Judging by the fact that we are trying to make predictions, we must split our data set into:
Inputs - numerical data values; Outputs - classification of Iris flower species.
Now that we have the data set loaded, we can extract the data we need into appropriate data sets in preparation for training and testing our Model.
Categorical Classification
Here we are using the Keras utility to_categorical() to turn our output categories into binary class matrices. This is often referred to as "One-Hot" encoding. This is for use with categorical_crossentropy and classification of our species (setosa, versicolor and virginica).
Each Species will be represented as a binary class matrix.
Setosa [1 0 0] Versicolor [0 1 0] Virginica [0 0 1]
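A minimal sketch of the data preparation and "One-Hot" encoding described above; the file name, the presence of a header row, and the tensorflow.keras import path (rather than standalone keras) are assumptions, not the notebook's exact code:

```python
import numpy as np
from tensorflow.keras.utils import to_categorical

# Assumed layout: 4 numeric measurement columns, then the species name.
data = np.genfromtxt("iris.csv", delimiter=",", dtype=str, skip_header=1)

inputs = data[:, :4].astype(float)     # sepal length/width, petal length/width
species = data[:, 4]                   # setosa / versicolor / virginica

# Map each species name to a class index, then one-hot encode the indices.
classes = sorted(set(species))         # ['setosa', 'versicolor', 'virginica']
outputs = to_categorical([classes.index(s) for s in species])
# setosa -> [1, 0, 0], versicolor -> [0, 1, 0], virginica -> [0, 0, 1]
```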
Creating a Model
Below we can see an example of how a Neural Network can be visualized. Every Neural Network is made up of these three main constituents.
Input Layer
Hidden Layer(s)
Output Layer
We are trying to create a model that will look somewhat similar to below:
Activation Functions
An Activation Function in a Neural Network defines the output of a given node given its input or set of inputs. Above we are applying two activation functions in separate layers.
Sigmoid
A sigmoid function is a mathematical function having an "S" shaped curve (sigmoid curve). Often, sigmoid function refers to the special case of the logistic function, defined by the formula σ(x) = 1 / (1 + e^(-x)).
Below we see a plot of the "S" shaped curve, or "Sigmoid Curve". Its uses in a neural network are:
- Activation function that transforms linear inputs into nonlinear outputs.
- Bounds the output to between 0 and 1 so that it can be interpreted as a probability.
- Makes computation easier than with arbitrary activation functions.
Softmax
Softmax regression (or multinomial logistic regression) is a generalization of logistic regression to the case where we want to handle multiple classes.
Here we are using Softmax to let our data flow through the hidden layers and essentially end up as one of our defined classes:
Setosa, Versicolor, Virginica
Configure the Model for training and fit the training data
We configure the Model using the compile() function defined in the Keras Model API. We define an Optimizer, a Loss function and an additional metric - accuracy.
So before we can use our Model for predictions, we must first train it. Using the training data subset which we extracted before, we can now fit it to our Model.
The goal here is for the Optimizer to essentially minimize the Loss.
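A sketch of a model matching the description above - a sigmoid hidden layer, a softmax output over the three species, compiled with an optimizer, categorical_crossentropy and an accuracy metric; the hidden-layer size and the choice of the 'adam' optimizer are my assumptions:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    Dense(16, input_shape=(4,), activation="sigmoid"),  # hidden layer (sigmoid)
    Dense(3, activation="softmax"),                      # output layer over 3 species
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```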
We fit the model, passing our inputs and our expected outputs, and train it across 100 "Epochs" or training cycles. On each iteration we improve the accuracy and minimize the loss.
Making predictions using the Model
To make predictions using our Model we must first prepare the input data to be what the model expects. Here we use a couple of Numpy functions such as around() and expand_dims() to prepare the input data for prediction.
We can then get our prediction as a String value from outputs_vals, which we defined earlier in the Notebook.
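A sketch of the training and prediction steps described above; inputs_train/outputs_train stand in for the training subset extracted earlier, the sample measurements are made up, and classes (from the earlier sketch) plays the role of the notebook's outputs_vals:

```python
import numpy as np

# Train across 100 epochs (training cycles).
model.fit(inputs_train, outputs_train, epochs=100)

# Prepare a single flower's measurements the way the model expects them.
sample = np.expand_dims(np.around([5.1, 3.5, 1.4, 0.2], 1), axis=0)  # shape (1, 4)

prediction = model.predict(sample)
print(classes[int(np.argmax(prediction))])   # e.g. "setosa"
```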
Saving and Loading the Model
Keras offers a very simplistic way to save and load your model. We can easily reload the model in another script using model = load_model("path_to_model.h5").
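For completeness, the matching save and load calls (the path simply mirrors the example above):

```python
from tensorflow.keras.models import load_model

model.save("path_to_model.h5")           # saves architecture, weights and optimizer state
model = load_model("path_to_model.h5")   # reload later, e.g. in another script
```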
Create Magical Number.cpp
Your friend loves magic and has coined a new term - "Magical number". To perform his magic, he needs that magical number. There are N people in the magic show, seated according to their ages in ascending order. The magical number is the seat number where the person's age is the same as the seat number. Help your friend find that "Magical number". Output Status: Correct Answer. Execution Time: 0.01
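A sketch of one way to solve it (in Python rather than the committed .cpp), assuming the ages are distinct integers so that age minus seat number is non-decreasing and binary search applies; with repeated ages a linear scan is the safe fallback:

```python
def magical_number(ages):
    """Return the 1-indexed seat number i with ages[i-1] == i, or -1 if none."""
    lo, hi = 1, len(ages)
    while lo <= hi:
        mid = (lo + hi) // 2
        if ages[mid - 1] == mid:
            return mid          # person's age equals their seat number
        if ages[mid - 1] < mid:
            lo = mid + 1        # age < seat here and to the left, so look right
        else:
            hi = mid - 1        # age > seat here and to the right, so look left
    return -1
```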
sounds for da slime and da ghosties
What the fuck did you just fucking say about me, you little bitch? I'll have you know I graduated top of my class in the Navy Seals, and I've been involved in numerous secret raids on Al-Quaeda, and I have over 300 confirmed kills. I am trained in gorilla warfare and I'm the top sniper in the entire US armed forces. You are nothing to me but just another target. I will wipe you the fuck out with precision the likes of which has never been seen before on this Earth, mark my fucking words. You think you can get away with saying that shit to me over the Internet? Think again, fucker. As we speak I am contacting my secret network of spies across the USA and your IP is being traced right now so you better prepare for the storm, maggot. The storm that wipes out the pathetic little thing you call your life. You're fucking dead, kid. I can be anywhere, anytime, and I can kill you in over seven hundred ways, and that's just with my bare hands. Not only am I extensively trained in unarmed combat, but I have access to the entire arsenal of the United States Marine Corps and I will use it to its full extent to wipe your miserable ass off the face of the continent, you little shit. If only you could have known what unholy retribution your little "clever" comment was about to bring down upon you, maybe you would have held your fucking tongue. But you couldn't, you didn't, and now you're paying the price, you goddamn idiot. I will shit fury all over you and you will drown in it. You're fucking dead, kiddo.
Revert "i know you can just undo, but fuck you"
This reverts commit 2797d49bafffcb2cad6d8e3cd3c6d57934545628.
Add PomPom
Fuck CoolStar, all my homies love Litten
Smooth supernorn walls (#4468)
- Smooth supernorn walls
- kk
- fuck this copy paste shit
- fix hardcoded bullshit
"10:55am. I am having inspiration again. I guess I am not going to program again. Instead, let me bring these thoughts to a conclusion.
The backprop issues I've talked about in the reversibility paper are not wrong.
Optimizing 1000 - y and 10 - 10 * 10 * y should be the same thing. The gradient would certainly be the same when y = 0. Yet there is something missing.
That something are optimization preferences.
11:05am. Seen from the perspective of NNs, rather than just the gradients, I should in fact take account of the weight norms in the upper layers. In addition to propagating the gradients, I should propagate the optimization preference.
11:10am. I've been thinking how I would design my own learning rules and...I don't think that the whole idea of differentiable manifolds needs to be swept aside. In fact imposing a manifold is the first thing you would do for efficiency. After that, there is nothing wrong with the backprop local rules either.
They do in fact give you the gradient.
But the algorithm is blind to the structure in some sense. Getting a gradient of 64 at y when optimizing y and when optimizing a*b*y might be a completely different thing. The gradient does in fact tell you how much the target will change when you move y around. There is no doubt about that here. But it does not tell you by how much y should be moved.
In the first case, if you are optimizing something like 64 - y, then it makes complete sense to move in a step of 64. But if you are optimizing 1 - a * b * y, then it does not make any sense at all. The thing has to slow down by necessity.
On these toy examples, the degree to which it should slow down is exactly obvious: counter the multiplication and turn it into a division.
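To make the toy example concrete (my own notation, using a squared-error loss on the residual):

```latex
L(y) = \tfrac{1}{2}\,(c - a\,b\,y)^2,
\qquad
\frac{\partial L}{\partial y} = -\,a\,b\,(c - a\,b\,y),
\qquad
\Delta y^{\text{exact}} = \frac{c - a\,b\,y}{a\,b}.
```

The gradient is scaled up by the factor a*b, while the step that actually zeroes the residual is scaled down by the same factor - countering the multiplication with a division.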
11:25am. Densenets and resnets were a big innovation allowing deep hierarchies to be trained, but I do not think they are right. Constantly barraging the net with gradients won't allow invariants to form even though it might deal with the vanishing gradient problem.
Seriously considering my local optimization ideas might be the way to go.
Yesterday I made my determination to make my goal studying these deep nets. That was right.
But I've also been thinking about differentiable manifolds and comparing them to the bayesian idea, and manifolds are not wrong.
There is something I've figured out about the Gaussian priors. Rather than having the weights decay, which struck me as intrinsically wrong, why don't I in fact put a manifold on the weights?
https://en.wikipedia.org/wiki/Gaussian_function
I mean, look at the bell curve here. This could serve as the local error function for the weight updates. And it would have the behavior of inhibiting movement when the weight gets too far from its mean. This should be the way to do it. I can't imagine neurons in the brain just decaying.
I wanted to find a way to put the priors at the buffer shaping level, but that is not right. Instead why don't I substitute probability priors with manifold shaping. In fact shaping the manifold by changing the replay buffer probabilities is the purpose of that as well.
11:50am. I am just lying here thinking - it is absolutely insane to use the backprop rules as they are.
Yeah, I need to modify them. Given the structure of the network, there should be better rules out there. Backprop is a good starting point when it comes to looking for them though. I already have to fiddle with init and the learning rates. Why not go the whole length?
Let me give that statistics book a try. I guess I'll explore today as well.
But when I strike, I will hit it right.
It is all about the rules. If I can grasp the right rules, I can attain everything. I should have courage and mix it up, much like nature did.
I won't throw out the idea of manifolds, but I should throw out the idea of blindly following the rules that backprop gives me.
12:30pm.
Necessity is obvious. To show the sufficiency, let ℰ be a collection of subsets of E that is both a p-system and a d-system. First, ℰ is closed under complements: A ∈ ℰ ⇒ E \ A ∈ ℰ, since E ∈ ℰ and A ⊂ E and ℰ is a d-system. Second, it is closed under unions: A, B ∈ ℰ ⇒ A ∪ B ∈ ℰ, because A ∪ B = (A^c ∩ B^c)^c and ℰ is closed under complements (as shown) and under intersections by the hypothesis that it is a p-system. Finally, this closure extends to countable unions: if (A_n) ⊂ ℰ, then B_1 = A_1 and B_2 = A_1 ∪ A_2 and so on belong to ℰ by the preceding step, and B_n ↑ ⋃_n A_n, which together imply that ⋃_n A_n ∈ ℰ since ℰ is a d-system by hypothesis.
Ugh, forget it. I am not good at visualizing these proofs in my mind unless there is a need for it.
1:40pm. Done with breakfast.
I am still thinking about it. Terror, trepidation, fear, loneliness, I've been going through those feelings in the past few days, but today it seems they are abating and in the depths of my heart I am starting to feel determination.
To get to this point, wasn't that the purpose of making Spiral? Wasn't that the purpose of the last six years of effort?
I've lived my life by my own rules. So it only goes to reason, that I'll do ML according my own rules as well.
I know quite a lot. I know most of the important things in deep learning. I've studied the subject to death. But it is not like I am good at it.
But I could be.
My own rules might turn out to be good like I hope, or they might turn out to be bad.
What is important is that I keep watch and think. If I make my own neurolab, and keep watch of what I am doing, then the second half of my journey will begin.
Without much fanfare, the first part of it ended when I finished the tabular player.
The only question I need to ask myself is whether I want to take responsibility for my own vision, or do I want to run away into the false safety of society?
I've failed before, and I will fail in the future. That does not matter.
1:50pm. If I continue going on this road, I think that the majority of heavy lifting in programming is behind me. I won't have to get my hands so extremely dirty in the future. I've been a hard labourer, programming wise in the past.
Well, it is not like the effort will be light, but (pats the box next to me) increasingly the workload will be done by my little guy.
1:55pm. I can do it. I can't figure out learning, but everything points that I should be able to take my current insights and significantly improve on it.
Though I won't use it right now, I already understand that while backprop's local rules might not be strictly composable, only weakly so, locally optimizing processes are strongly composable. If I keep going in this direction at some point I will find out the secret of why the brain has such strong generative capabilities.
There is something about the idea that the upper layers should generate targets instead of propagating raw gradients. The idea has potential.
I thought that before, but then I got scared off without really investigating the idea.
2pm. It does not have to be like this. I can throw in the towel here, or I can start my ascent to transcendence.
Maybe current methods are no good and cannot be improved with just hacking and zero theory. Maybe the Singularity will not happen in my lifetime. Maybe I'd be better off getting a job instead and living a normal life.
2:05pm. But wouldn't it be wonderful if I lived believing in it? Wouldn't it be wonderful if my ideas succeed and I made a superhuman poker agent? Wouldn't it be wonderful (for me) if I took it on a rampage and made my millions that way?
Wouldn't it be wonderful if in the future my vision of Spiral bore fruit and I got access to those AI chips?
Wouldn't it be wonderful if I managed to take advantage of those increased capabilities to develop generally intelligent agents? Wouldn't it be wonderful to exchange my will and their power?
2:15pm. This won't happen unless I work on it.
But I can't treat it as a chore. I can't treat it as some drudgery that I have to do.
What I will get doing here is stacking real benefits together to get something great.
Trying out my various ideas and seeing them work would be an enormous benefit. It wouldn't be money by itself, but it would in fact be something better.
I need to believe that is better. I need to believe that the programming skills I've attained, and the ML skills that I will attain are the real value of my journey.
If I did, I'd never have to endure loneliness again.
2:20pm. I should pursue my goal with zeal. Make the agent, get the money, resolve the health issue with mom, go into the beyond right into the maw of tomorrow.
It does not need to be fancy or complicated.
All I need is the right rules, and the right hardware.
And the belief in myself and the determination to follow my own path.
2:50pm. Let me not push myself. Why don't I take a break for a few days? How about I just chill without restrictions and beating myself up over not programming every hour of the day?
The surest way to get sick of work is to constantly keep doing it. Normal people take breaks two days a week for a reason. Only I myself am plugged into it 24/7.
2:55pm. My ideas are all significant improvements. After much effort, I figured out completely how to make the networks reward invariant. And now my ideas have been extended to normalizing the weight gradients in the hidden layers.
All this will amount to benefits. But even though I might have found something buried in the ground, I don't own it until I actually dig it out and take it for myself.
In 2018 I pushed particularly hard for KFAC, but though KFAC can do reward normalization, it is very much a reactive thing. It needed an identity-matrix term of about 0.1. It made things a little better, but the keyword is little. It is very likely that the net could blow up before the covariance matrices get estimated. It could blow up because of those 0.1 raw gradients as well.
In contrast to that, my ideas are both much simpler, are predictive rather than reactive, and perfectly resolve the issues involved in reward variance.
The reasoning is impeccable, but I need to actually go and try it out.
3:10pm. Yeah, let me have a vacation. Reading novels when I am not dead tired from work, watching anime, playing games. Just today, and maybe tomorrow. And then I will do the monthly review and jump into this with all my zeal.
https://www.youtube.com/watch?v=S27pHKBEp30 LSTM is dead. Long Live Transformers!
Let me do some random things like watching this transformer video. I keep looking into what transformers are and promptly forgetting it afterwards.
3:25pm. https://youtu.be/S27pHKBEp30?t=515
Transfer learning never really worked on these LSTM models.
Huh, this is new to me. I never knew about that.
https://youtu.be/S27pHKBEp30?t=1053
Hnnnmmm, now I remember. I should consider using this for poker instead of regular RNNs. But regular RNNs should be more straightforward to deal with.
Obviously, feedforward nets would be the easiest, so I should go with that first. If I use a binary representation for stack sizes, I'll be able to do very large sequences.
3:50pm. https://www.youtube.com/watch?v=wFsI2WqUfdA DeepMind x UCL | Deep Learning Lectures | 9/12 | Generative Adversarial Networks
It has been a while since I watched any deep learning lectures. I used to do this a lot.
https://www.youtube.com/channel/UCP7jMXSY2xbc3KCAE0MHQ-A
Oh, Deepmind has a lot here. There is even a RL course.
...Hmmm, maybe I'll take a look at the advanced topics. Vanilla single agent RL is useless. They should be teaching CFR.
https://youtu.be/wFsI2WqUfdA?t=1352
I only picked this video up on a whim, but it is pretty good. I am starting to understand why GANs are unstable.
4:40pm. https://youtu.be/wFsI2WqUfdA?t=2088
I guess it was not because of that.
7pm. https://youtu.be/7Pcvdo4EJeo?t=458
I am still watching these vids.
This non GAN stuff is better than I thought. The previous lecture was getting boring so I skipped it. I am actually pretty tired of these lectures, but they are quite good.
https://youtu.be/7Pcvdo4EJeo?t=4091
Amortized variational inference is interesting. I saw this once before in the context of probabilistic programming. It was by that Wood guy, I forgot what his first name was. I think it was Frank. He published that paper with Baydin.
https://youtu.be/7Pcvdo4EJeo?t=4224
I wish he talked about ensembling, but I guess that won't happen here.
8:55pm. https://youtu.be/7Pcvdo4EJeo?t=5145
Ah wait, didn't Wood say that amortized inference was exact inference? This lecture did not mention that.
9:05pm. I should watch the unsupervised learning lecture, but I am too tired right now and my stress is sky high.
I am thinking, and it is not necessarily a given that local optimization will give incorrect targets for GAN training. I mean, if it can give exact gradient, who is to say that local training will degenerate to autoencoder training. Maybe it will give unrealistic input targets that average out to the right thing rather than the average pixel like in regular autoencoders.
9:10pm. Hack the backpropagation rules, replace global with local training, shatter the dogma and the dominance of the current gradient based training! Make my own manifold! Shape the replay buffer!
I can see it. And I can feel it. I just need the courage to grasp it.
Right now, my fear is pushing me into information seeking mode. I should not be watching these lectures, but I can't help it. Eventually I will get tired of being in this state and just go for it. But until then, it is not too horrible of an idea to spend my time studying.
Ultimately, what I want to do will not be any of the papers as it is too heretical. I am going to have to do it on my own."
Upgrade for Version 1.06
Initial Loco:
-
The wire to count yellow inserters on the belt is before the inserters are made by the second assembler. Moved that to after the second assembler.
-
The two yellow inserters that put material on the blue science sushi immediately bottleneck research; I upgraded those to blue. You should have blue inserters by then. Also put blue inserters on the feed and exit to the wire and green assemblers (red circuit feed only). These also create an immediate bottleneck, but more importantly, you don't need to build these factories or the inserters until you are ready to make red circuits, and this also serves as a reminder. Also, with blue inserters here, you are good for maximum blue science production with both the assembler 1 and the assembler 2.
-
The steel filter was 1000 steel, which takes 5,000 iron plates to make. It's good to have a few if you need to steal steel (don't you love English) to make medium electric poles, but taking these takes ammo from defenders and slows research. Do you have a good reason to do that?
Fish Wagon, Wagon 1:
- New red ammo factories are short of yellow ammo and require a red underground. Unless you find two of the red undergrounds, it is impossible to do military research or create grenades without a temporary redesign of this. Now you have two yellow and five red ammo factories. My only concern is how much pressure this will put on research.
- Pretty much every playthrough someone has deleted the two extra battery factories. I don't know why, but perhaps it's the fact that blue chips don't work, or it's too taxing on the oil supply. In any case, I took those out and moved to what I typically see as a replacement.
- It doesn't happen a lot, but with iron being the critical problem until well after yellow science is working, there is no reason to fill 24 furnaces with 100 steel each, or 2,400 steel (12,000 iron plates/iron ore). Once the 300-steel buffer is full, stop making steel. I wired the inserters to not insert when the buffer is near full.
- Chest limits. Inserters are limited to wait for 3 or 12 items before they move any material. This is nice for the first and last boxes because it provides a visual for how full the buffer chain is, and I suspect it has the benefit of decoupling the fact that different inserters have different speeds and stack sizes. However, other than the first and last set of boxes, it prevents you from seeing how much material is stored. Removed the green wire on all but the first and last boxes. Also fixed a minor filter problem where the coal filter had iron in it.
Sort and Store, Wagon 2 - Red belt production was not limited, and I saw 3,000 red belts and 900 red undergrounds in here in one game. That is 100,000 iron plates of belt that no one needs at the time. Unironically, someone said, "how come we are out of iron and we have 38 players online?" I limited that to 200 belts and 100 undergrounds. Also fixed a power pole and added the roboport. I like the reactor; perhaps it can allow removal of the efficiency chips in yellow and purple, which I use a lot to reduce pollution, but they slow you down and save power in this game, so I suspect there is a power problem with just the coal/steam generators.
fixed background rendering
with shittiest workaround possible, late night programming. damn i gotta work tomorrow fuckk
"Finished" game, added ending classes and just whatever god awful game is this
this game fucking sucks
Commit
This platform is an act of ❤️. I hope you share this with your friends & family. I thank you for taking the time to do this practice with me. I am so grateful to journey alongside you. Sending blessings of prosperity, abundance, joy, and deep personal fulfillment. - Chai Forest | Captain/Curator Moonshots
Commit
This platform is an act of ❤️. I hope you share this with your friends & family. I thank you for taking the time to explore the technology solutions I have thoughtfully curated to help uplift, empower, and make your daily life a little bit more wonderful. I am so grateful to journey alongside you. Sending blessings of prosperity, abundance, joy, and deep personal fulfillment. - Chai Forest | Captain/Curator Moonshots
Update
- Changed the Church of Sin bonfire into the Swamp of Sin and moved it onto the wooden platform above the swamp in the Profaned Capital.
- Removed enemies near the Secluded Cloister bonfire.
- Fixed Rosaria/McDonnell relationship dialogue.
- Fixed Master Benjin's dialogue in m40.
- Added a new boss in the Profaned Capital swamp: the Fallen Protector and the Fair Maiden
- Drops Soul of the Maiden, which can be transposed into:
- Blueblood Sword
- Quicksilver Shield
- Life Ring
- The Quicksilver Set will appear in the swamp cave after they have been defeated.