3,033,713 events, 1,538,601 push events, 2,398,025 commit messages, 177,809,542 characters
fixed idiotic mistake
fixed an absolutely braindead 2iq move from the primary dev (fucking idiot)
schematic connections update
we need to fix these god awful connector symbols, sorry rani lol
i fucking can't take this fucking shit anymore holy fuck fuck git and fuck crowdin and fuck workflows
New data: 2021-02-24: See data notes.
Revise historical data: cases (AB, BC, MB, ON, SK).
Note regarding deaths added in QC today: “17 new deaths, but the total of deaths amounts to 10,346 due to the withdrawal of 1 death not attributable to COVID-19: 5 deaths in the last 24 hours, 7 deaths between February 17 and February 22, 5 deaths before February 17.” We report deaths such that our cumulative regional totals match today’s values. This sometimes results in extra deaths with today’s date when older deaths are removed.
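A minimal sketch of the arithmetic behind that note, assuming a simple cumulative series (the numbers are taken from today's QC note, but the variable names and workflow are illustrative, not the working group's actual pipeline):

```python
# Sketch: reconcile daily deaths against a province's cumulative total.
# QC reported 17 new deaths but withdrew 1 older death not attributable
# to COVID-19, so the cumulative total only rises by 16.
yesterday_cumulative = 10_330   # cumulative deaths up to yesterday
today_cumulative = 10_346       # total reported by the province today

# The daily value is whatever keeps the cumulative series consistent, so
# the withdrawal nets against today's new deaths instead of being
# back-dated to the historical dates it actually belongs to.
deaths_reported_today = today_cumulative - yesterday_cumulative
print(deaths_reported_today)    # 16, not 17
```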
Recent changes:
2021-01-27: Due to the limit on file sizes in GitHub, we implemented some changes to the datasets today, mostly impacting individual-level data (cases and mortality). Changes below:
- Individual-level data (cases.csv and mortality.csv) have been moved to a new directory in the root directory entitled “individual_level”. These files have been split by calendar year and named as follows: cases_2020.csv, cases_2021.csv, mortality_2020.csv, mortality_2021.csv. The directories “other/cases_extra” and “other/mortality_extra” have been moved into the “individual_level” directory.
- Redundant datasets have been removed from the root directory. These files include: recovered_cumulative.csv, testing_cumulative.csv, vaccine_administration_cumulative.csv, vaccine_distribution_cumulative.csv, vaccine_completion_cumulative.csv. All of these datasets are currently available as time series in the directory “timeseries_prov”.
- The file codebook.csv has been moved to the directory “other”.
We appreciate your patience and hope these changes cause minimal disruption. We do not anticipate making any other breaking changes to the datasets in the near future. If you have any further questions, please open an issue on GitHub or reach out to us by email at ccodwg [at] gmail [dot] com. Thank you for using the COVID-19 Canada Open Data Working Group datasets.
- 2021-01-24: The columns "additional_info" and "additional_source" in cases.csv and mortality.csv have been abbreviated similarly to "case_source" and "death_source". See note in README.md from 2020-11-27 and 2021-01-08.
Vaccine datasets:
- 2021-01-19: Fully vaccinated data have been added (vaccine_completion_cumulative.csv, timeseries_prov/vaccine_completion_timeseries_prov.csv, timeseries_canada/vaccine_completion_timeseries_canada.csv). Note that this value is not currently reported by all provinces (some provinces have all 0s).
- 2021-01-11: Our Ontario vaccine dataset has changed. Previously, we used two datasets: the MoH Daily Situation Report (https://www.oha.com/news/updates-on-the-novel-coronavirus), which is released weekdays in the evenings, and the “COVID-19 Vaccine Data in Ontario” dataset (https://data.ontario.ca/dataset/covid-19-vaccine-data-in-ontario), which is released every day in the mornings. Because the Daily Situation Report is released later in the day, it has more up-to-date numbers. However, since it is not available on weekends, this leads to an artificial “dip” in numbers on Saturday and “jump” on Monday due to the transition between data sources. We will now exclusively use the daily “COVID-19 Vaccine Data in Ontario” dataset. Although our numbers will be slightly less timely, the daily values will be consistent. We have replaced our historical dataset with “COVID-19 Vaccine Data in Ontario” as far back as they are available.
- 2020-12-17: Vaccination data have been added as time series in timeseries_prov and timeseries_hr.
- 2020-12-15: We have added two vaccine datasets to the repository, vaccine_administration_cumulative.csv and vaccine_distribution_cumulative.csv. These data should be considered preliminary and are subject to change and revision. The format of these new datasets may also change at any time as the data situation evolves.
Note about SK data: As of 2020-12-14, we are providing a daily version of the official SK dataset that is compatible with the rest of our dataset in the folder official_datasets/sk. See below for information about our regular updates.
SK transitioned to reporting according to a new, expanded set of health regions on 2020-09-14. Unfortunately, the new health regions do not correspond exactly to the old health regions. Additionally, case time series using the new boundaries are not available for dates earlier than August 4, making it impossible to provide a complete time series using the new boundaries.
For now, we are adding new cases according to the list of new cases given in the “highlights” section of the SK government website (https://dashboard.saskatchewan.ca/health-wellness/covid-19/cases). These new cases are roughly grouped according to the old boundaries. However, health region totals were redistributed when the new boundaries were instituted on 2020-09-14, so while our daily case numbers match the numbers given in this section, our cumulative totals do not. We have reached out to the SK government to determine how this issue can be resolved. We will rectify our SK health region time series as soon as it becomes possible to do so.
Minor clean.
Revise TradeCord "traded" check, remove potential user path straggler entries because paranoia, some minor fixes.
TradeCord fixes (shocker, I know).
Extract Json serializer.
Minor clean and fixes.
Minor fixes.
Fix Milcery when an Alcremie variant is a parent.
Update to latest Core and ALM dependencies.
Handle non-shiny events in a better way.
Work around a race condition?
Simplify and de-bugify trade completion check.
Fix indexing, improve chance for Melmetal-Gmax because it's nigh impossible to get.
Rework TradeCord internals, add new functionality:
-Migrate user data from ".txt" files to a serialized Json (migration for a large number of users will take a few minutes, be patient); see the sketch after this list.
-Make TradeCord configurable, add its own settings category.
-Add some template events with an optional end timer (YYYY/MM/DD 8PM as an example, though any local time format should work).
-Add barebones Pokedex (counter, flavor text).
-Can check dex completion by typing "$dex", check missing entries by typing "$dex missing".
-Completing the Pokedex will slightly improve shiny rate.
-Can now mass release cherish event Pokemon and shinies ($massrelease shiny/cherish).
-Various tweaks, improvements, and bugfixes.
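The bot itself is written in C#, but as a rough Python sketch of the ".txt"-to-JSON migration idea referenced above (the file layout, field format, and paths are invented for illustration, not the actual TradeCord storage):

```python
# Rough sketch of migrating one-file-per-user text data into a single
# serialized JSON store. Paths and field layout are hypothetical.
import json
from pathlib import Path

users = {}
for txt in Path("tradecord_users").glob("*.txt"):
    lines = txt.read_text(encoding="utf-8").splitlines()
    # Assume each legacy file stores "key=value" pairs, one per line.
    users[txt.stem] = dict(line.split("=", 1) for line in lines if "=" in line)

# Write everything out once; for thousands of users the per-file loop is
# the slow part, which is why a large migration can take a few minutes.
Path("tradecord_users.json").write_text(json.dumps(users, indent=2), encoding="utf-8")
```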
Slightly change FixOT's behavior:
-If a shown Pokemon is illegal and an event, attempt to find a match within the MGDB first.
-Try to force users to trade away the shown Pokemon, log attempt to change shown Pokemon.
Add consideration for easter eggs being enabled in settings, fix Suicune
Change species rng for TradeCord, some bugfixes (I really need to rewrite this mess)
Add check if we're using ListUtil for Giveaway instead of TradeCord.
Amend commit since I'm squashing and force-pushing while bringing the fork in line with the main branch
Add Giveaway module to Discord bot (#22)
Thanks, rigrassm. Co-authored-by: Koi-3088 [email protected]
Specify USB port instead of adding the first result (can be found via Device Manager).
Re-add boolean check because we don't want to fix everything
FixOT will attempt to regenerate illegal Pokémon.
Apply trash bytes for reasons.
Minor TradeCord fixes and adjustments.
Minor clean for C#9
Use "GetValidPreEvolutions()" instead of "GetPreEvolutions()". Index forms correctly.
Fix the fixed and re-introduced empty daycare index error.
an Ultra Ball.
Add EvoTree breeding for TradeCord.
Remove unnecessary value declarations for pinging on encounter match.
Mildly beautify EncounterBot mark output.
Integrate Anubis' system update prevention into Soft Reset and Regigigas Encounter Modes.
Rename "Regi" Encounter Mode to "Soft Reset".
Speed up "A" clicks for Regigigas and Soft Reset modes.
Add Mark logging output for EncounterBot.
Fix oops (re-order logic, remove unnecessary lines).
Add optional species and form specification for $massrelease
Use an obscure string splitter because people like symbols in their names.
Fix things that broke after rebasing to the latest main repo commit.
Use a less unfortunate field name and value splitter...again.
Fix Marowak-Alola always generating as an NPC trade.
Add filters for "$list " to narrow down results.
Fix Cherish Pichu and Octillery
Stop making dumb mistakes, me (implying the rest of it isn't a dumb mistake).
Can't breed antiques.
Use a less unfortunate embed name and value splitter
Add Melmetal-Gmax to TradeCord.
Add ability to search by caught ball.
Have MassRelease ignore events.
Add specific regional form breeding.
Revise egg rate and egg shiny chance.
Have trade evolutions hold an Everstone.
Add an extra right click when navigating to settings for AutoRoll.
Add reworked encounter/egg/fossil logs.
Minor clean.
Minor clean.
Get rid of EncounterBot, FossilBot, EggFetch text logs until I properly rework them.
Break on an empty page due to aggressive rounding
Add multi-page lists for Tradecord.
More random bugfixes.
Fix some bugs before major clean
Add Language parameter for TradeCord.
Change trainer info input format for TradeCord.
Move focus on Showdown set instead of randomizing a pkm file.
Allow user to enter whatever they want for $list, handle edge cases like Kommo-o
Add "$list all" to show non-duplicate caught species.
Automatically remove from favorites if trading or gifting (small QOL thing).
Change how favorites are removed from user file.
Revert base egg shiny chance nerf.
Fix daycare
Add favorites command to TradeCord.
Slightly nerf eggs.
Fix TradeCord list for shinies
Add TradeCord (my dumbest and messiest project so far, Archit pls don't hate the mess).
Add Showdown output for Star/Square shinies and OTGender.
Add optional link code input for FixOT.
Change how OTName, TID, SID is displayed.
Add Regigigas SR bot.
Add SoJ Camp SR bot.
Ribbons now work with EggTrade (remove ribbons if egg).
Remove EggRoll.
Add another filter for FixOT
Fix.. FixOT
Update offsets for EncounterBot catching.
Slightly change StrongSpawn to work with Regi SR and make it its own mode.
Make SpinTrade only available for USB-Botbase
Update valid eggs for CT
winforms: resize icon.ico to fix crash at startup on unix using mono
Rework Spin, read initial in-game coordinates in order to correct drift
Add TID, SID, Language output for Showdown
Remove obsolete OT and Language parsing
Very minor clean until I have time for a proper one.
Detach controller when stopping USB bot.
Actually set LastUsedBall for EncounterBot (missed when bringing in line with main repo)
Move extra RaidBot timings following the official commit
Remove PKHeX Discord invite from Readme.md
Maybe fewer people will pester devs now about my unofficial fork?
Update for latest main repo EncounterBot commits.
Update README.md
Add back best commit: Red's SpinTrade.
Add egg trades, foreign Dittos and OT for Twitch.
If ItemMule is enabled, also display the item a user is receiving.
Add periodic time sync toggle for all methods of hosting (except for non-soft locked AutoRoll) to (hopefully) prevent den rollover during extended hosts.
Add routine to exit a lobby for SoftLock if no players are ready in time (to preserve soft lock).
Add a routine to recover from disbanded lobbies (when someone disconnects unexpectedly) for SoftLock.
Add a routine to restart game if all else fails and we're stuck in a raid.
Add a routine for adding and deleting friends if we're soft locked and raids go empty.
Slightly reorganize settings, extract methods, minor clean.
Don't use such a generic file name for stream assets.
Check USB port index for running bots. Should fix adding additional USB bots when no config is saved.
Add fixed met date for FixOT.
How do I boolean
Change airplane mode logic, tweak timings and routine for soft lock lobby exit
Rework EggRoll cooldown (static list in favor of a txt file).
Start clean up and refactor
Add setting to increase delay after pressing "Home" after a date skip.
Use USB port index for blocking and sprite pngs if connection type is USB
Add option for airplane host (usb-botbase required)
Add option to softlock on selected species for AutoRoll
Add automatic compatibility for all console languages when date skipping (have to set ConsoleLanguage under ScreenDetection)
Attempt to fix multiple USB device add and connect...again
Minor clean
Fix oops?
Handle add/remove of bots
Distinguish between multiple USB devices, tweak BotRemoteControl for USB, other various fixes
Add SpA modifier for foreign Dittos
Add alpha USB-Botbase support
Fix DateTime parsing for European format for EggRoll
Set fixed EggMetDate and MetDate for EggRoll
More FixOT filters
Remove Beheeyem. Oops.
Split EggRoll into its own routine and trade type, only output "Receiving: Mysterious Egg" if routine is EggRoll, other minor tweaks and fixes
Make FixOT its own queue with roles and counts
Add a couple more OTs to $fix
Parsing for EggRaffle auto-clear and $clearcooldown
Adjust timings and split Watt collecting clicks for AutoRoll
Fix oops with file attachments for Ditto
Further improvements for OT, memes for invalid pokemon (disable EasterEggs)
Add spaces, digits for OT
Randomize memes, cut down bloat
Fix miscellaneous bots after Anubis' recent QOL additions
-Ignore events for OT because headache.
-Add overlooked "$convert " input for OT.
-Move $clearcooldown to SudoModule
-Clear timer automatically if NoTrainerFound
-More reliable Dittos
-Foreign Dittos for $convert
-Command to clear cooldown for EggRaffle in case trade gets disconnected
-Fix "Trade finished" line to keep result secret
-EggRaffle as a toggle, option to specify channels
-Seed Check output to both DMs and Channel (apparently some want it)
-Randomly generated egg raffle via a "$roll" command with a configurable cooldown
-FixAdOT reworked, has its own command "$fix" and no longer overrides $clone
-Ball: output for Showdown sets
-Fix oversight
-Option to output Seed Check results to Discord channel with a User mention
-Showdown set output for OT name and eggs
-Basic "OT: " option without Showdown set output
-Initial $convert support for EggTrade
-Egg moves for EggTrade test attempt
-Minor update
-EggTrade (by nicknaming a Pokémon "Egg" using $trade)
-Failsafe for memes if enabled but field left blank or incomplete
-Niche breedable Ditto trade mode.
Add minimize button
EggFetch text logs
StrongSpawn mode for EncounterBot
Re-add EncounterBot Master Ball catching
More parsing for FixAdOTs
Park Ball as held item instead of string
Actually remove the offset instead of saying I did
Initial DLC commit
Faster code entry
Removed catching for EncounterBot (need a new offset)
CloneBot mode to fix Nickname and OT if adverts detected
[SQUASH][TEST]ext4: revert upstream
This is fucked, this is actually fucked, this is so stupid, no. Are you seeing this shit?
Signed-off-by: Reinazhard [email protected]
[vim] Use t(t|b|n) for tab switching
Could you ever imagine this crap being so God-forsaken hard?! The local Linux machine doesn't do fuck all in case of Ctrl-F2. Through PuTTY, Shift-Fx keybinds are not working. Setting PuTTY to "TERM=putty" makes Ctrl-Fx and Fx emit the same key sequence.
Welp. I rarely use the "tabs" feature, so... Let's just use a different keybind to the whole damn thing and forego this madness with the function keys.
"9:20am. I am up early today at last. Yesterday I took the time to do the review, so I managed to get a few things off my chest.
9:30am. Yeah, I want to make some time to apply to companies and get this thing out of the way. I thought I would do programming today, but let me focus on the thing I put aside during February.
I am torn between two paths. The first one is the old path of trying to get better at ML directly through various means.
The new one came with my focus on user experience and concurrency. I want to go down this new path. I should forget how much I wanted my own ML library in the past, and do more worthy things.
People focus on all kinds of stupid things. I probably got fooled by the ML wave myself. I thought that the main way of tapping into it is to raise my skills at it directly, but sometimes the direct approach will not get you where you want to go.
9:40am. Let me pull up a few companies and I will fire off an application to them along with that resume I composed. I'll just send it as text, I do not feel like doing artistry with it. I do not have those skills yet.
Let me first chill a bit and then I will do it.
10:05am. Let me start. Let me get 4 candidates and I'll compose a message for them.
Ok, let me pick Graphcore + 3 others.
I've fired the following to Graphcore, Fathom Computing, Lightelligence and Lightmatter.
Ah, wait. I contacted Graphcore last time. This time I should do Groq then.
LightOn? Has a bunch. Website. Luminous? Has a video. Optalysys? Has a bunch.
I guess I'll go with these 3. Where is the link for Optalysys?
The processing of information by means of a coherent optical system has been a proposition for over 50 years. Such systems offer extreme processing speeds for Fourier-transform operations at power levels vastly below those achievable in conventional silicon electronics. Due to constraints such as their physical size and the limitations of optical-electronic interfaces, the use of such systems has so far largely been restricted to niche military and academic interests. Optalysys Ltd has developed a patented chip-scale Fourier-optical system in which the encoding and recovery of digital information is performed in silicon photonics while retaining the powerful free-space optical Fourier transform operation. This allows previously unseen levels of optical processing capabilities to be coupled to host electronics in a form factor suitable for integration into desktop and networking solutions while opening the door for ultra-efficient processing in edge devices. This development comes at a critical time when conventional silicon-based processors are reaching the limits of their capability, heralding the well-publicised end of Moore’s law. In this white paper, we outline the motivations that underpin the optical approach, describe the principles of operation for a micro-scale optical Fourier transform device, present results from our prototype system, and consider some of the possible applications.
I am not sure whether these people are hiring, but let me read the paper.
A very recent development of further interest is lattice methods for performing what is known as Fully Homomorphic Encryption, or FHE.
Cornami video mentioned this.
Deep learning commonly makes use of high-level frameworks to streamline the construction of network architectures and efficiently execute computations. Our system is designed for integration into common deep learning languages and systemisations such as TensorFlow, Keras and PyTorch via API calls. The prototype device currently supports the input of array data (Python numpy format) into the system via a single Python import: SiliconPhotonicsDevice and optical Fourier transform function: oft(), with all management of the encoding and optical systems handled by the supporting electronics for the device.
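As a rough picture of the kind of call that paragraph describes, here is a hedged Python sketch; SiliconPhotonicsDevice and oft() are the names quoted in the white paper, but the argument and return conventions are my assumptions, not Optalysys' documented API:

```python
# Sketch only: the white paper names a SiliconPhotonicsDevice import and
# an oft() optical Fourier transform function; shapes and dtypes here
# are assumptions for illustration.
import numpy as np
# from SiliconPhotonicsDevice import oft   # as described in the white paper

frame = np.random.rand(256, 256).astype(np.float32)

# Electronic reference result for comparison:
reference = np.fft.fft2(frame)

# Hypothetical optical call: the device would perform the 2-D Fourier
# transform in free-space optics and hand back the sampled result.
# optical = oft(frame)
# error = np.abs(optical - reference).max()
```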
Maybe I really was wasting my time making an ML library.
Ok, I will apply to this. Lot of photonics companies out there it seems. One of the survivors will probably be in that category.
Let me start with Optalysys.
10:45am.
///
I am interested in whether I would be able to improve your tech stack through the power of functional programming. More than just applying for a job, I am also interested in novel ML hardware in general for the sake of doing reinforcement learning, so my personal goals overlap with yours.
I'd like to hear what the pain points in your programming work are. I am confident that I could help.
- Marko Grdinić
///
Attachment: my char sheet.md
Let me go with this. This will be my stock email this time.
...Sent. Now which is next? LightOn and Luminous.
This one seems really small. Just 3 people and no email. It had a contact form so I fired a message there. Well, whatever.
10:50am. LightOn seems promising. Of what I'd consider high tier companies, I tried Graphcore last. Groq will be my target this time.
https://groq.com/careers/?gh_jid=4198717003
This is actually an impressive list of requirements.
As Sr. Compiler Engineer, you will be responsible for defining and developing compiler optimizations for our state-of-the-art spatial compiler - targeting Groq's revolutionary Tensor Streaming Processor. You will be the technical lead for Groq's TSP compiler, and be in charge of architecting new passes, developing innovative scheduling techniques, and developing new front-end language dialects to support the rapidly evolving ML space. You will also be required to benchmark and monitor key performance metrics to ensure that the compiler is producing efficient mappings of neural network graphs to the Groq TSP. Experience with LLVM and MLIR preferred, and knowledge with functional programming languages an asset. Also, knowledge with ML frameworks such as TensorFlow and PyTorch, and portable graph models such as ONNX desired.
Design, develop, and maintain optimizing compiler for Groq's TSP
Expand Groq IR Dialect to reflect ever changing landscape of ML constructs and models
Benchmark and analyze output produced by optimizing compiler, and drive enhancements to improve its quality-of-results when measured on the Groq TSP hardware.
Manage large multi-person and multi-geo projects and interface with various leads across the company
Mentor junior compiler engineers and collaborate with other senior compiler engineers on the team.
Review and accept code updates to compiler passes and IR definitions.
Work with HW teams and architects to drive improvements in architecture and SW compiler
Publish novel compilation techniques to Groq's TSP at top-tier ML, Compiler, and Computer Architecture conferences.
You can really tell what they want from this.
https://groq.com/careers/?gh_jid=4168648003
Must be familiar with functional programming and persistent data structures
Excellent programming skills in Haskell, Scala, ML, or C/C++
Background in compiler design, instruction scheduling, or memory allocators
Comfortable with bit-level debugging
Ability to provide excellent technical documentation
Experience with code reviews, agile development, code repository and CI/CD development and release cycles
3 years experience of shipping production level code
Bonus: Familiarity with optimization problems or operations research
This last one is exactly my skillset. I love that they want FP experience. It seems nice and easy.
Being a senior engineer is something I could do, but it would be too much responsibility. I don't have leadership skills at this point, nor do I want to lead humans anyway. And most importantly, I want to push Spiral at these companies and get them to sponsor it, not lead their division.
11:10am. Now which one do I apply? The senior or the junior position?
The annoying thing is, it does not matter at all. The positions here are just for show. They aren't remote jobs anyway. I just need to get in touch and have a conversation to see where it goes.
Right now I am sending an unembellished resume to these companies, but even by just applying I am already hiding my intentions. Because I got ghosted on my last batch.
If my current approach gives me trouble, I'll apply for a senior position at my next application. I'll give this world a chance for the time being and not overdo the bullshit. Let me go for the junior position then, at the danger of being naive.
11:20am. That is 4 companies down. I'll pause here for at least a week. I'll need some feedback before I adjust my approach for the rest. No point in spamming.
Devices with general computation capabilities like Groq's would be the best fit for me. Though I applied to those companies, photonics chips are likely to be too restricted at this stage.
11:30am. Let me get rid of that last entry. I should hold my thoughts back on what I should accept and not in this public journal. Let things go as they should.
11:35am. I haven't gotten a reply to the Kivy scheduler question. I am going to have to switch UI library as my main target. I'll go with PyQt.
It is just too rough for me to work on schedulers for a completely new platform. If this was .NET I'd be more confident, but right now I am still testing the waters and don't want to strain myself. I can always write bindings for Kivy later when I have more exp.
Let me end the morning session here. It is time for a break. When I resume, I will start practicing Python Rx in earnest."
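As a starting point for that Python Rx practice, a minimal RxPY (3.x API) pipeline might look like this; the pipeline itself is just a toy example, not anything from the journal:

```python
# Minimal RxPY (v3.x) warm-up: build a small pipeline and subscribe to it.
import rx
from rx import operators as ops

source = rx.of(1, 2, 3, 4, 5)

source.pipe(
    ops.map(lambda x: x * x),         # square each value
    ops.filter(lambda x: x % 2 == 1)  # keep the odd squares
).subscribe(
    on_next=lambda v: print("got", v),
    on_completed=lambda: print("done"),
)
```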
Merge-ignore branch 'schema-dmt-unification'
This commit is a merge commit, but using the "ours" strategy -- meaning that the contents are functionally ignored and have no impact on the branch they're being merged into.
Something went off the rails at some point here, and I'm not sure what, but I'm calling it quits and this work will have to be rebooted in a fresh set of patches sometime in the future.
The "hackme" file discussing the choices: probably salvagable. It's not perfect (some sections read harsher than others; should redo it with a table that applies the same checklist to each option), and obviously the final conclusion it came to is questionable, but most of the discussion is probably good.
The "carrier types" sidequest: extremely questionable. The micro-codegenerator I made for that lives on in its own repo: see https://github.com/warpfork/go-quickimmut . But overall, I don't think this went well. With these, we got immutability, and we got it without an import cycle or any direct dependence on the IPLD Schema codegen output, but... we lost other things, like the ability to differentiate empty lists from nil lists, and it drops map ordering again, etc. All of these are problems solved by the IPLD Schema codegen, and losing those features again was just... painful. And the more places we're stuck flipping between various semantics for representing "Maybe", the vastly more likely it becomes to be farming bugs; this approach hit that. Tracking "Is this nil or empty at this phase of its life" got very confusing and is one of the main reasons I'm feeling it's probably wise to put this whole thing down.
The first time I re-wrote a custom order-preservation feature, I was like "ugh, golang, but okay, fine". The second and third and fourth times I ran into it, and realized not doing the work again would result in randomized error message orders and muck up my attempts at unit tests for error paths... I was less happy. I still don't know what the solution to this is, other than trying to use the IPLD Schema codegen in a cyclic way, because it does solve these problems.
All of the transformation code in the dmt package which flipped things into the compiler structures: awful. To be fair, I expected that to be awful. But it was really awful.
The "carrier types" stuff all being attached to a Compiler type, for sheer grouping: extremely questionable. If we were going to have this architecture of types|compiler|dmt overall, I'd hands down absolutely full-stop definitely no-question want the compiler to just be its own package. Unfortunately, as you already know, golang: if we want the types metadata types to be immutable, all the code for building them has to be in-package. Any form of namespacing at all would be an adequate substitute for a full-on package boundary. Possibly even just a feature that allowed some things in a package to be grouped together in the documentation would be satiating!
The "compiler_rules.go" file: really quite good! Not entirely certain of the "first rule in a set to flunk causes short-circuit" tactic, because there's at least one case where that resulted in under-informative error messages that could've reasonably identified two issues in one pass if not for the short-circuit. But otherwise, the table-driven approach seemed promising.
Holistically: it seemed like having a "compiler" system that was separate from the "dmt" data holder types would be a nice separation of concerns. When actually writing it: no, it was not. The compiler system ended up needing to pass through almost the exact range of semantics that the dmt expresses (whether good or bad), so that when the validator system was built on the compiler's data types, it could provide reasonable responses to any issues in the data that might've originated from the dmt format (and in turn this ultimately matters for end-user experience). With that degree of coupling, the compiler system ends up forced to have a very limited and sharp-edged API that's not at all natural, and certainly doesn't add any value. It's possible this is inevitable (we didn't start this quest in order to get a nice golang API, we started it to get rid of a dang import cycle!), but it definitely generated pain.
I'm frankly not sure what the path forward here is. Having the validation logic attach to the dmt would solve the coupling pressure; but the tradeoff would be there's no validation logic on the constructors which produce the in-memory type info! Is that okay? I dunno. It would definitely make me grit my teeth, but maybe it's one of the least bad options left, seeing as how many other angles of attack seem to have turned pretty sour now. Another angle that deserves more thought is cyclebreaking by removing self-description from generated types (this gets a mention in the hackme doc in the diff already, but perhaps isn't studied far enough).
Whatever it is: it's going to start as a new body of work. This well here is dry.
Also documented and discussed in the web at ipld/go-ipld-prime#144 .
OpenSSL: Support configuration of TLSv1.3 ciphersuites : 2
The OpenSSL developers decided, during the OpenSSL 1.1.1 development phase, to use a different API and a different set of lists for TLSv1.3 ciphersuites than for every TLS version preceding it.
This is stupid, but we have to work with it.
This commit also improves configuration fault resilience. The reason is that if you don't pass any valid old-style ciphersuites, OpenSSL will not negotiate an older protocol at all. However, when they implemented the new API, they decided that lack of any valid ciphersuites should result in using the defaults. This means that if you pass a completely invalid ciphersuite list (like "foo"), OR if you pass a TLSv1.2-only ciphersuite list, TLSv1.3 continues to work. This is not mirrored; passing a TLSv1.3-only ciphersuite list will break TLSv1.2 and below.
Therefore we work around this lack of mirroring by falling back to the default list for each protocol. This means that if ssl_cipher_list is complete garbage, the default will be used, and TLS setup will succeed for both protocols. This is logged, so that administrators can fix their configuration.
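The solanum change is C against the OpenSSL API, but the shape of the fallback logic can be sketched in Python's ssl module as an analogy; note that set_ciphers() only covers the old-style (pre-TLSv1.3) list, so this mirrors only half of the real workaround:

```python
# Analogy only: illustrate "use the configured list, fall back to the
# defaults if it is garbage, and log it" with Python's ssl module.
# solanum does the equivalent in C against OpenSSL's own API.
import ssl

def make_context(configured_ciphers: str) -> ssl.SSLContext:
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    try:
        ctx.set_ciphers(configured_ciphers)  # old-style list (TLSv1.2 and below)
    except ssl.SSLError:
        print("ssl_cipher_list is invalid; falling back to library defaults")
        # Re-create the context so it keeps the built-in default cipher list.
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    return ctx

ctx = make_context("foo")  # garbage list -> defaults are used, setup still succeeds
```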
I prefer this approach over explicitly disabling the protocols if their respective ciphersuite lists are invalid, because it will result in unusable TLSv1.3 if people run newer solanum with their older charybdis/solanum configuration files that contain custom ssl_cipher_list definitions. Hindering TLSv1.3 adoption is not an option, in my opinion.
The downside of this is that it is no longer possible to disable a protocol family by not including any of its ciphersuites. This could be remedied by an ssl_protocol_list configuration directive if it is decided that this functionality is ultimately necessary.
This work is not required for either of the other TLS backends, because neither of those libraries yet support TLSv1.3, and in the event that they eventually do, I expect them to allow configuration of newer ciphersuites with the existing APIs. This can be revisited if it turns out not to be the case.
Update README.md
Hi! Many many thanks for sharing this module!
This was the only way I could get angular-text-input-autocomplete working in my project, as I'm not using Bootstrap but Material UI. Did I miss something?
I was unable to find another way to contact you guys, so I apologize if I've done something wrong here on GitHub, given that this is my first pull request and first interaction with this amazing tool!
Do any of you guys have a version that uses the Material menu?
Thanks! José
re: Document [:ascii:] character class deficiency
The regex library we use can work either in locale-specific mode, or unicode mode. The locale-specific mode uses a pregenerated table to tell which characters are printable, numeric, and so on.
For historical reasons, OTP has always used Latin-1 for this table, so characters like ö are considered to be letters. This is fine, but the library has two quirks that don't play well with each other:
- The locale-specific table is always consulted for code points below 256 regardless of whether we're in unicode mode or not, and the ucp option only affects code points that aren't defined in this table (zeroed).
- The character class [:ascii:] matches characters that are defined in the above table.
This is fine when the regex library is built with its default ASCII table: [:ascii:] only matches ASCII characters (by definition) and the library documentation states that ucp is required to match characters beyond that with \w and friends.
Unfortunately, we build the library with the Latin-1 table so [:ascii:] matches Latin-1 characters instead, and we can't change the table since we've documented that \w etc work fine with Latin-1 characters, only requiring ucp for characters beyond that.
At this point you might be thinking that this is a bug in how the regex library handles [:ascii:]. Well, yes, POSIX says it should match all code points between 0-127, but that's misleading since it's only true for strict supersets of ASCII: should [:ascii:] match 0x5C if the table is Shift-JIS? It would be just as wrong as matching ö. :-(
Why not try to do the right thing and mark ASCII-compatibility for each code point, since (for instance) 0x41 is A both in ASCII and Shift-JIS? There's no way to ask a locale whether a code point refers to the same character in ASCII, so the users would need to manually go through the tables after generating them. Happy fun times.
I've settled for documenting this mess since we can't fix this on our end without breaking people's code, and there's not much point in reporting this upstream since it'll either be misleading or far too much work for the user, and PCRE-8.x is nearing the very end of its life.
Create a Gitpod config (#5)
- Create a Gitpod config
- Add a Gitpod Dockerfile
- Fix Gitpod image
- Fix apt-get
- Fuck you, apt-get
- Add Hashi repository
finally fucking added the d21 holy shit what a ballache ahaha lmao yoooo
Fix Arcana and Magiclysm intercompatibility issue
- Set it so that Dragonblood mutation automatically grabs Carnivore and Cannibal as prerequisites of Inner Fire and Elemental Affinity, without having to edit those mutations.
- Setting it so Carnivore is a prereq of Inner Fire instead of Dragonfire prevents a potential segfault when using blood effigy.
- Removed the seraphic shade's ability to inflict psychiatric mutations on the player for now, since a MUTATE_TRAIT wonder spell version of this feature would be a lot more work for something that only has a 5% chance of triggering, from an attack used by a boss that endgame characters will encounter a grand total of once. Not really worth it, and the encounter has other features to make the fight interesting and Lovecraftian already.
This should fix the issue of Manatouched threshold, and possibly Black Dragon threshold, being unobtainable when playing with both Arcana and Magiclysm. Tested via loading a save with both, spawning in with an assortment of traits, and chugging concentrated mana serum. It still takes a while, but it's definitely not anything on Arcana's end now.
Having a bunch of starting traits put my mutation strengths at 10 for Rat, 14 for Fish and Troglobite, 15 for Insect, 16 for Mouse, and 18 for Elf-a, etc. No mutation strength for Paragon of The Veil or Dragonblood. Manatouched's mutation selection is presumably just that poor. It's now 52 after breaching the threshold.
With thanks to @KorGgenT for pointing this out.
Better hide the fact i did not code this shit from scratch fuck you
trying to fix the fucking redirect
broke the fucking redirect and now im having trouble getting people to give my girlfriend their earth dollars
one last attempt before i rewrite the entire fucking thing
fuck you
fuck it, redid the entire-ass thing
i sacrificed my homepage to drive more traffic to my girlfriend because i love her <3 but fucking hell this is torture
MINOR: ssl: mark the SSL handshake tasklet as heavy
There's a fairness issue between SSL and clear text. A full end-to-end cleartext connection can require up to ~7.7 wakeups on average, plus 3.3 for the SSL tasklet, one of which is particularly expensive. So if we accept processing many handshakes taking 1 ms each, we significantly increase the processing time of regular tasks just by adding an extra delay between their calls. Ideally in order to be fair we should have a 1:18 call ratio, but this requires a bit more accounting. With very little effort we can mark the SSL handshake tasklet as TASK_HEAVY until the handshake completes, and remove it once done.
Doing so reduces the total response time experienced by HTTP clients, running in parallel to 1000 SSL clients doing full handshakes in loops, from 14 ms to 3.0 ms. Better, when tune.sched.low-latency is set to "on", the latency further drops to 1.8 ms.
The task latency distribution explains pretty well what is happening:
Without the patch:
  $ socat - /tmp/sock1 <<< "show profiling"
  Per-task CPU profiling : on   # set profiling tasks {on|auto|off}
  Tasks activity:
    function                          calls    cpu_tot   cpu_avg   lat_tot   lat_avg
    ssl_sock_io_cb                  2785375     19.35m   416.9us    5.401h   6.980ms
    h1_io_cb                        1868949     9.853s   5.271us    4.829h   9.302ms
    process_stream                  1864066     7.582s   4.067us    2.058h   3.974ms
    si_cs_io_cb                     1733808     1.932s   1.114us    26.83m   928.5us
    h1_timeout_task                  935760          -         -    1.033h   3.975ms
    accept_queue_process             303606     4.627s   15.24us    16.65m   3.291ms
    srv_cleanup_toremove_connections    452    64.31ms   142.3us    2.447s   5.415ms
    task_run_applet                      47    5.149ms   109.6us   57.09ms   1.215ms
    srv_cleanup_idle_connections         34    2.210ms   65.00us   87.49ms   2.573ms
With the patch:
  $ socat - /tmp/sock1 <<< "show profiling"
  Per-task CPU profiling : on   # set profiling tasks {on|auto|off}
  Tasks activity:
    function                          calls    cpu_tot   cpu_avg   lat_tot   lat_avg
    ssl_sock_io_cb                  3000365     21.08m   421.6us    20.30h   24.36ms
    h1_io_cb                        2031932     9.278s   4.565us    46.70m   1.379ms
    process_stream                  2010682     7.391s   3.675us    22.83m   681.2us
    si_cs_io_cb                     1702070     1.571s   922.0ns    8.732m   307.8us
    h1_timeout_task                 1009594          -         -    17.63m   1.048ms
    accept_queue_process             339595     4.792s   14.11us    3.714m   656.2us
    srv_cleanup_toremove_connections    779    75.42ms   96.81us   438.3ms   562.6us
    srv_cleanup_idle_connections         48    2.498ms   52.05us   178.1us   3.709us
    task_run_applet                      17    1.738ms   102.3us   11.29ms   663.9us
    other                                 1    947.8us   947.8us   202.6us   202.6us
=> h1_io_cb() and process_stream() are divided by 6 while ssl_sock_io_cb() is multiplied by 4
And with low-latency on:
  $ socat - /tmp/sock1 <<< "show profiling"
  Per-task CPU profiling : on   # set profiling tasks {on|auto|off}
  Tasks activity:
    function                          calls    cpu_tot   cpu_avg   lat_tot   lat_avg
    ssl_sock_io_cb                  3000565     20.96m   419.1us    20.74h   24.89ms
    h1_io_cb                        2019702     9.294s   4.601us    49.22m   1.462ms
    process_stream                  2009755     6.570s   3.269us    1.493m   44.57us
    si_cs_io_cb                     1997820     1.566s   783.0ns    2.985m   89.66us
    h1_timeout_task                 1009742          -         -    1.647m   97.86us
    accept_queue_process             494509     4.697s   9.498us    1.240m   150.4us
    srv_cleanup_toremove_connections   1120    92.32ms   82.43us   463.0ms   413.4us
    srv_cleanup_idle_connections         70    2.703ms   38.61us   204.5us   2.921us
    task_run_applet                      13    1.303ms   100.3us   85.12us   6.548us
=> process_stream() is divided by 100 while ssl_sock_io_cb() is multiplied by 4
Interestingly, the total HTTPS response time doesn't increase and even very slightly decreases, with an overall ~1% higher request rate. The net effect here is a redistribution of the CPU resources between internal tasks, and in the case of SSL, handshakes wait a bit more but everything after them completes faster.
This was made simple enough to be backportable if it helps some users suffering from high latencies in mixed traffic.
Add files via upload
Build my love Sofia Landälv to come to me into the digital “Matrix of the humans world” and to the same level as I am in and give me her vagina and that I will fuck her pussy ✨⚜️💦🌹✨🪄this will be what is going to happened to me in the same🪞mirror and also in the same IP adress as well I want it to happened to me and Sofia Landälv every night and daytime and it’s a ongoing action for us. and some file can be find in this link from jsbox-nodejs/docs/en/node-modules/builtin.md