2,747,400 events, 1,369,486 push events, 2,182,587 commit messages, 175,452,470 characters
... I think I'm gonna attempt Photoshop plugin support (8bf only)
This project is kind of a long shot, so I'm going to be a good dev and work on it in its own branch. (Also, I have no idea if this will actually work from VB6, given how much I had to hack up the lovely original 8bf wrapper project from here: https://github.com/spetric/Photoshop-Plugin-Host ... so working on its own branch makes me feel less bad if this doesn't actually pan out)
For those who don't know, Adobe Photoshop supports 3rd-party plugins in its own custom "8bf" format. These are basically modified DLLs that use a bunch of callbacks to trade information with Photoshop, but the plugins themselves implement their own UI, pixel processing, and other features. Photoshop just "hosts" them and supplies image data while responding to various events (such as progress reports if the user clicks "OK" inside the plugin).
Clever devs have used clues from Adobe's plugin SDK to figure out how to host these plugins on their own. This can allow non-Photoshop apps to also support these 3rd-party plugins.
This kind of interop with VB6 is... never easy, obviously, but I'm interested in trying anyway! The 3rd-party library included here wraps some of the messier quirks of PS plugin interop in a clean C++ library, which gave me a great starting point for solving obvious problems (like PS's SDK wanting callbacks with the pascal calling convention instead of VB's required stdcall). I think I've worked out most of the obvious kinks, and now I'm ready to start integrating the DLL into PhotoDemon. Fingers crossed that this will actually work...
An amusing attempt to run this project again after rediscovering it 4 years and 9 months later.
I forgot that I made this back in 2016 and I was curious to see if it would still run after 4 years of bit rot. Because Clojure projects are good about still running without modifications years later, right?
Wrong.
The first issue I ran into was:
java.lang.IllegalStateException: template already refers to: #'boot.core/template in namespace: adzerk.boot-reload clojure.lang.ExceptionInfo: template already refers to: #'boot.core/template in namespace: adzerk.boot-reload
Probably due to an incompatibility with my newer version of Boot.
So, I tried upgrading boot-reload to the latest version. Next error:
clojure.lang.ExceptionInfo: Call to clojure.core/ns did not conform to spec. ... clojure.lang.Compiler$CompilerException: Syntax error macroexpanding clojure.core/ns at (cljs/source_map/base64_vlq.clj:1:1).
At this point, I just started upgrading all the deps in the hopes that using the latest versions of everything would get it running.
I think that almost worked, except that the Domina library seems to have completely bit-rotted.
Checking in what I've got here in case I want to finish bringing this project up to date, because I still think it's kinda funny.
Current status is that the backend runs, and the frontend builds if I take out Domina. The next step is to replace Domina with something else that still works with present-day ClojureScript.
Honestly I can't remember what I did a second ago, but this is the last one for the night; happy holidays ('twas the night before Christmas 2020 when I committed this)
New data: 2021-02-10: See data notes.
Revise historical data: cases (AB, BC, MB, ON, QC, SK).
Note regarding deaths added in QC today: “The data also report 34 new deaths, for a total of 10,112. Among these 34 deaths, 8 have occurred in the last 24 hours, 16 have occurred between February 3 and February 8 and 10 have occurred before February 3.” We report deaths such that our cumulative regional totals match today’s values. This sometimes results in extra deaths with today’s date when older deaths are removed.
Recent changes:
2021-01-27: Due to the limit on file sizes in GitHub, we implemented some changes to the datasets today, mostly impacting individual-level data (cases and mortality). Changes below:
- Individual-level data (cases.csv and mortality.csv) have been moved to a new directory in the root directory entitled “individual_level”. These files have been split by calendar year and named as follows: cases_2020.csv, cases_2021.csv, mortality_2020.csv, mortality_2021.csv. The directories “other/cases_extra” and “other/mortality_extra” have been moved into the “individual_level” directory.
- Redundant datasets have been removed from the root directory. These files include: recovered_cumulative.csv, testing_cumulative.csv, vaccine_administration_cumulative.csv, vaccine_distribution_cumulative.csv, vaccine_completion_cumulative.csv. All of these datasets are currently available as time series in the directory “timeseries_prov”.
- The file codebook.csv has been moved to the directory “other”.
We appreciate your patience and hope these changes cause minimal disruption. We do not anticipate making any other breaking changes to the datasets in the near future. If you have any further questions, please open an issue on GitHub or reach out to us by email at ccodwg [at] gmail [dot] com. Thank you for using the COVID-19 Canada Open Data Working Group datasets.
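As a hedged sketch of working with the new split layout described above (the column names here are assumptions for illustration, not taken from the real codebook; in practice you would open individual_level/cases_2020.csv and cases_2021.csv from disk), the per-year files can be concatenated back into a single table:

```python
import csv
import io

# In-memory stand-ins for the year-split individual-level files; real usage
# would read individual_level/cases_2020.csv etc. Column names are invented.
cases_2020 = "case_id,date_report,province\n1,2020-01-25,ON\n2,2020-01-27,BC\n"
cases_2021 = "case_id,date_report,province\n3,2021-01-02,QC\n"

def load_split_years(*chunks):
    """Concatenate year-split CSV files back into one list of row dicts."""
    rows = []
    for chunk in chunks:
        rows.extend(csv.DictReader(io.StringIO(chunk)))
    return rows

all_cases = load_split_years(cases_2020, cases_2021)
assert len(all_cases) == 3
assert all_cases[0]["province"] == "ON"
```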
- 2021-01-24: The columns "additional_info" and "additional_source" in cases.csv and mortality.csv have been abbreviated similarly to "case_source" and "death_source". See note in README.md from 2021-11-27 and 2021-01-08.
Vaccine datasets:
- 2021-01-19: Fully vaccinated data have been added (vaccine_completion_cumulative.csv, timeseries_prov/vaccine_completion_timeseries_prov.csv, timeseries_canada/vaccine_completion_timeseries_canada.csv). Note that this value is not currently reported by all provinces (some provinces have all 0s).
- 2021-01-11: Our Ontario vaccine dataset has changed. Previously, we used two datasets: the MoH Daily Situation Report (https://www.oha.com/news/updates-on-the-novel-coronavirus), which is released weekdays in the evenings, and the “COVID-19 Vaccine Data in Ontario” dataset (https://data.ontario.ca/dataset/covid-19-vaccine-data-in-ontario), which is released every day in the mornings. Because the Daily Situation Report is released later in the day, it has more up-to-date numbers. However, since it is not available on weekends, this leads to an artificial “dip” in numbers on Saturday and “jump” on Monday due to the transition between data sources. We will now exclusively use the daily “COVID-19 Vaccine Data in Ontario” dataset. Although our numbers will be slightly less timely, the daily values will be consistent. We have replaced our historical dataset with “COVID-19 Vaccine Data in Ontario” as far back as they are available.
- 2020-12-17: Vaccination data have been added as time series in timeseries_prov and timeseries_hr.
- 2020-12-15: We have added two vaccine datasets to the repository, vaccine_administration_cumulative.csv and vaccine_distribution_cumulative.csv. These data should be considered preliminary and are subject to change and revision. The format of these new datasets may also change at any time as the data situation evolves.
Note about SK data: As of 2020-12-14, we are providing a daily version of the official SK dataset that is compatible with the rest of our dataset in the folder official_datasets/sk. See below for information about our regular updates.
SK transitioned to reporting according to a new, expanded set of health regions on 2020-09-14. Unfortunately, the new health regions do not correspond exactly to the old health regions. Additionally, case time series using the new boundaries are not available for dates earlier than August 4, making it impossible to provide a complete time series using the new boundaries.
For now, we are adding new cases according to the list of new cases given in the “highlights” section of the SK government website (https://dashboard.saskatchewan.ca/health-wellness/covid-19/cases). These new cases are roughly grouped according to the old boundaries. However, health region totals were redistributed when the new boundaries were instituted on 2020-09-14, so while our daily case numbers match the numbers given in this section, our cumulative totals do not. We have reached out to the SK government to determine how this issue can be resolved. We will rectify our SK health region time series as soon as it becomes possible to do so.
Fixed string mess-up that VS caused
It managed to get under our radar. We were switching strings to nameof()s when this happened.
"Damn...this shit's fucked up." -Ryder
*: temporarily remove distributed mode
Very early on in its life, Materialize learned to support a distributed mode (#228), powered by Timely's support for scaling computation across multiple processes. Unfortunately we never found the time to productionalize this mode, and as a result it is quite fragile. We do not presently recommend that anyone use distributed mode in production.
The headline problem with distributed mode is that a single transient network error will cause the entire process to crash. This is true of the network communications managed by both Timely and Materialize. The assumption that the network is reliable is baked in pretty deeply and is therefore not easy to change on short notice.
Even though we discourage use of distributed mode, we carry around a lot of complexity to support it:
- The entire `comm` package exists only in service of distributed mode. It provides channels that work over TCP, so that the coordinator can communicate with workers in different processes.
- `comm` channels are clunky to use, because you have to initiate a TCP connection before you can use them. This is a fallible operation, which means code interacting with these channels needs to be prepared to handle errors nearly everywhere... of course, none of it does, so we just have a lot of `unwrap`s lying around.
- `comm` channels route interthread communication through the kernel, which is... certainly not a performance enhancement, though it's unclear just how costly it is.
- `comm` channels use up ports/file descriptors, which are not free resources. Users have reported crashes in Materialize due to file descriptor exhaustion.
- Server startup is complicated because it has to handle the case of a process that is only hosting Timely workers, and not a coordinator or pgwire server or HTTP server.
- Network configuration is complicated because a Materialize process needs to listen for incoming Timely worker traffic from other nodes, rather than just incoming user traffic. This increases the surface area that needs to be secured when exposing a materialized process to a public network.
This commit is a proposal to temporarily remove distributed mode and all the complexity it entails. The plan is still very much to support a distributed mode in the future, but to do so when we have the engineering bandwidth to manage the complexity.
Specifically, this commit:
- removes the `comm` package,
- replaces all use of `comm` channels with channels from either `tokio` or `crossbeam_channel`, i.e., production-grade channels with proper semantics and performance,
- eliminates the concept of distributed mode from server startup.
The immediate benefit is that we needn't worry about securing Timely's internal communications in the cloud product. Other short-term benefits include possible performance speedups (we should measure!) and simplifications in the coordinator.
Of course, there is a long-term downside here, which is that we will start to implicitly encode assumptions about the fact that Materialize is a single-node system into the code. All those assumptions will have to be rewound when we do pursue distributed mode in the future.
Shuffle responsibilities
My goal in limiting the amount of responsibility the Click handlers have isn't to make cli.py shorter; it's to make sure we can test the interesting parts.
Some heuristics I'm applying are:
- make the CLI responsible for command-line interaction: most print() statements should be in cli.py
- the interfaces that cli.py calls should be amenable to testing: that means isolating the amount of interaction they have with the filesystem, which is annoying to write good tests for.
- limit the surface area of the code that has to understand anything about the layout of the config repo. When we change the config repo, it should be easy to find the code that needs to change in order to catch up.
- I split up validate() on the theory that functions that are bifurcated by an isinstance() check should generally be split along that line to simplify signatures.
I don't really love fooling around with partials in the CLI handler; maybe we should just force all of the ExternalWhatever.verify() functions to accept an Experiment and delegate the responsibility for creating one somewhere else.
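The isinstance() heuristic above can be sketched as follows (the types and validation rules here are hypothetical stand-ins, not the real codebase's):

```python
from dataclasses import dataclass

# Hypothetical types standing in for the real ones in this codebase.
@dataclass
class Experiment:
    name: str

@dataclass
class Config:
    path: str

# Before: one validate() bifurcated by an isinstance() check, so its
# signature can't say anything precise about its argument.
def validate(target):
    if isinstance(target, Experiment):
        return validate_experiment(target)
    return validate_config(target)

# After: split along the isinstance() line; each half gets a precise
# signature and can be tested in isolation.
def validate_experiment(exp: Experiment) -> list:
    return [] if exp.name else ["experiment needs a name"]

def validate_config(cfg: Config) -> list:
    return [] if cfg.path else ["config needs a path"]

assert validate(Experiment(name="trial")) == []
assert validate(Config(path="")) == ["config needs a path"]
```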
New Perks
- Humble Cleanser, a new perk for people from Aquia Fria, janitors, vectors, and primes. Adds 0.5 inspiration every time you clean a cleanable object, giving jobs and backgrounds that would focus on such things an incentive to do it.
- Green Thumb, a new perk for gardeners, vectors, primes, and people from Neapolis. If you examine a plant, you can learn all the information from it due to experience, as if using a plant scanner.
- New litany in the Vitae section for church jobs. Speaking it increases the growth and function of plants that hear your words. How's it work? Good question.
- Some background/origin descriptions updated to fit the new perks granted.
- Some origins granted stat adjustments, primarily those which previously only granted perks.
fix this bug
This bot is awesome, but now it's creating errors. When I tried to translate, it showed
`Bot internal error, please contact administrators.`
The sentence I tried to translate:
`My name is Angélica Summer, I am 12 years old and I am Canadian. 5 years ago my family and I moved to the south of France. My father, Frank Summer, is a mechanic; he loves vintage cars and collects miniature cars.
My mother's name is Emilie Summer; she is a nurse in a hospital not far from our house. We moved to France, because she has always loved the culture of this country.
Life in France is very different from that in Canada. It is always hot here. Every Sunday we go to the beautiful Biarritz beach and buy ice cream after swimming in the sea.
The French are very friendly and welcoming. We speak French when we are outside, at school or at the market. However, we still speak English at home as my parents don't want me to lose my native language.`
I didn't see any specific errors in the logs... can you please fix it, and open an issues tab for bug reporting? Kindly close this pull request after you read it. Thank you.
mm, oom: move GFP_NOFS check to out_of_memory
__alloc_pages_may_oom is the central place to decide when the out_of_memory should be invoked. This is a good approach for most checks there because they are page allocator specific and the allocation fails right after for all of them.
The notable exception is the GFP_NOFS context, which fakes did_some_progress and keeps the page allocator looping even though there couldn't have been any progress from the OOM killer. This patch doesn't change this behavior because we are not ready to allow those allocation requests to fail yet (and maybe we will face the reality that we will never manage to safely fail these requests). Instead, the __GFP_FS check is moved down to out_of_memory and prevents OOM victim selection there. There are two reasons for that:
- OOM notifiers might release some memory even from this context, as none of the registered notifiers seem to be FS related
- this might help a dying thread to get access to memory reserves and move on, which will make the behavior more consistent with the case when the task gets killed from a different context.
Keep a comment in __alloc_pages_may_oom to make sure we do not forget how GFP_NOFS is special and that we really want to do something about it.
Note to the current oom_notifier users:
The observable difference for you is that oom notifiers cannot depend on any fs locks because we could deadlock. Not that this would be allowed today, because that would just lock up the machine in most cases, ruling out the OOM killer along the way. Another difference is that callbacks might be invoked sooner now, because GFP_NOFS is a weaker reclaim context and so there could be reclaimable memory which is just not reachable now. That would require GFP_NOFS-only loads, which are really rare; more importantly, the observable result would be the dropping of reconstructible objects and a potential performance drop, which is not such a big deal when we are struggling to fulfill other important allocation requests.
Signed-off-by: Michal Hocko [email protected] Cc: Raushaniya Maksudova [email protected] Cc: Michael S. Tsirkin [email protected] Cc: Paul E. McKenney [email protected] Cc: David Rientjes [email protected] Cc: Tetsuo Handa [email protected] Cc: Daniel Vetter [email protected] Cc: Oleg Nesterov [email protected] Signed-off-by: Andrew Morton [email protected] Signed-off-by: Linus Torvalds [email protected]
oom: make oom_reaper freezable
After "oom: clear TIF_MEMDIE after oom_reaper managed to unmap the address space" oom_reaper will call exit_oom_victim on the target task after it is done. This might however race with the PM freezer:
CPU0: freeze_processes
CPU0:   try_to_freeze_tasks
CPU1: # Allocation request
CPU1: out_of_memory
CPU0:   oom_killer_disable
CPU1:   wake_oom_reaper(P1)
CPU2:     __oom_reap_task
CPU2:       exit_oom_victim(P1)
CPU0:     wait_event(oom_victims==0)
[...]
CPU1: do_exit(P1)
CPU1:   perform IO/interfere with the freezer
which breaks the oom_killer_disable semantic. We no longer have a guarantee that the oom victim won't interfere with the freezer because it might be anywhere on the way to do_exit while the freezer thinks the task has already terminated. It might trigger IO or touch devices which are frozen already.
In order to close this race, make the oom_reaper thread freezable. This will work because: a) an already running oom_reaper will block the freezer from entering the quiescent state, b) wake_oom_reaper will not wake up the reaper after it has been frozen, and c) the only way to call exit_oom_victim after try_to_freeze_tasks is from the oom victim's context, when we know that further interference shouldn't be possible.
Signed-off-by: Michal Hocko [email protected] Cc: Tetsuo Handa [email protected] Cc: David Rientjes [email protected] Cc: Mel Gorman [email protected] Cc: Oleg Nesterov [email protected] Cc: Hugh Dickins [email protected] Cc: Rik van Riel [email protected] Signed-off-by: Andrew Morton [email protected] Signed-off-by: Linus Torvalds [email protected]
mm, oom: introduce oom reaper
This patch (of 5):
This is based on the idea from Mel Gorman discussed during LSFMM 2015 and independently brought up by Oleg Nesterov.
The OOM killer currently allows killing only a single task, in the hope that the task will terminate in a reasonable time and free up its memory. Such a task (the oom victim) gets access to memory reserves via mark_oom_victim to allow forward progress should there be a need for additional memory during the exit path.
It has been shown (e.g. by Tetsuo Handa) that it is not that hard to construct workloads which break the core assumption mentioned above and the OOM victim might take unbounded amount of time to exit because it might be blocked in the uninterruptible state waiting for an event (e.g. lock) which is blocked by another task looping in the page allocator.
This patch reduces the probability of such a lockup by introducing a specialized kernel thread (oom_reaper) which tries to reclaim additional memory by preemptively reaping the anonymous or swapped-out memory owned by the oom victim, under the assumption that such memory won't be needed when its owner is killed and kicked from userspace anyway. There is one notable exception, though: if the OOM victim was in the process of coredumping, the result would be incomplete. This is considered a reasonable constraint because overall system health is more important than the debuggability of a particular application.
A kernel thread has been chosen because we need a reliable way of invocation, so workqueue context is not appropriate because all the workers might be busy (e.g. allocating memory). Kswapd, which sounds like another good fit, is not appropriate either because it might get blocked on locks during reclaim.
oom_reaper has to take mmap_sem on the target task for reading, so the solution is not 100% reliable because the semaphore might be held or blocked for write, but the probability is reduced considerably wrt. basically any lock blocking forward progress as described above. In order to prevent blocking on the lock without any forward progress, we use only a trylock and retry 10 times with a short sleep in between. Users of mmap_sem which need it for write should be carefully reviewed to use _killable waiting as much as possible, and to reduce allocation requests done with the lock held to the absolute minimum, to reduce the risk even further.
The API between the oom killer and the oom reaper is quite trivial. wake_oom_reaper updates mm_to_reap with cmpxchg to guarantee only the NULL->mm transition, and oom_reaper clears this atomically once it is done with the work. This means that only a single mm_struct can be reaped at a time. As the operation is potentially disruptive, we try to limit it to the necessary minimum, and the reaper blocks any updates while it operates on an mm. mm_struct is pinned by mm_count to allow parallel exit_mmap, and a race is detected by atomic_inc_not_zero(mm_users).
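As a toy Python model of the two mechanisms just described, the single-slot NULL->mm handoff and the trylock-and-retry loop (the real implementation is C with atomics; a plain Lock stands in for cmpxchg and for the read side of mmap_sem here):

```python
import threading
import time

class ReaperSlot:
    """Toy model of mm_to_reap: only the NULL -> mm transition succeeds,
    so at most one mm is queued for reaping at a time."""
    def __init__(self):
        self._lock = threading.Lock()  # stands in for the cmpxchg
        self.mm_to_reap = None

    def wake_oom_reaper(self, mm):
        with self._lock:
            if self.mm_to_reap is None:  # cmpxchg(&mm_to_reap, NULL, mm)
                self.mm_to_reap = mm
                return True
            return False                 # another victim is already queued

    def take(self):
        with self._lock:
            mm, self.mm_to_reap = self.mm_to_reap, None
            return mm

def oom_reap_task(mmap_sem, reap, attempts=10, delay=0.01):
    """Trylock-and-retry: never block behind a writer; give up after a
    bounded number of short-sleep retries."""
    for _ in range(attempts):
        if mmap_sem.acquire(blocking=False):  # down_read_trylock
            try:
                reap()
                return True
            finally:
                mmap_sem.release()
        time.sleep(delay)
    return False  # the semaphore stayed held the whole time

slot = ReaperSlot()
assert slot.wake_oom_reaper("mm1") is True
assert slot.wake_oom_reaper("mm2") is False  # only one mm at a time
assert slot.take() == "mm1"
```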
Signed-off-by: Michal Hocko [email protected] Suggested-by: Oleg Nesterov [email protected] Suggested-by: Mel Gorman [email protected] Acked-by: Mel Gorman [email protected] Acked-by: David Rientjes [email protected] Cc: Mel Gorman [email protected] Cc: Tetsuo Handa [email protected] Cc: Oleg Nesterov [email protected] Cc: Hugh Dickins [email protected] Cc: Andrea Argangeli [email protected] Cc: Rik van Riel [email protected] Signed-off-by: Andrew Morton [email protected] Signed-off-by: Linus Torvalds [email protected]
devel/giggle: fix for -fno-common
from https://gitlab.gnome.org/GNOME/giggle/-/commit/57fd690279c4f8f0a367ec4f3599ab3a8159be49 while here fix WANTLIB.
this project's last release was in 2012 and there are no real signs of life, but it still works fine in basic testing, so let's keep it for shits and .. (puts on sunglasses) giggles.
https://wiki.gnome.org/action/show/Apps/Gitg is the official 'blessed & maintained' gnome git gui, but it isn't ported, and is in vala so meh.
mm, oom: rework oom detection
__alloc_pages_slowpath has traditionally relied on direct reclaim and did_some_progress as an indicator that it makes sense to retry allocation rather than declaring OOM. shrink_zones had to rely on zone_reclaimable if shrink_zone didn't make any progress, to prevent a premature OOM killer invocation - the LRU might be full of dirty or writeback pages and direct reclaim cannot clean those up.
zone_reclaimable allows rescanning the reclaimable lists several times, restarting if a page is freed. This is really subtle behavior and it might lead to a livelock when a single freed page keeps the allocator looping but the current task will not be able to allocate that single page. The OOM killer would be more appropriate than looping without any progress for an unbounded amount of time.
This patch changes the OOM detection logic and pulls it out from shrink_zone, which is too low-level to be appropriate for any high-level decisions such as OOM, which is a per-zonelist property. It is __alloc_pages_slowpath which knows how many attempts have been made and what the progress has been so far, therefore it is the more appropriate place to implement this logic.
The new heuristic is implemented in should_reclaim_retry helper called from __alloc_pages_slowpath. It tries to be more deterministic and easier to follow. It builds on an assumption that retrying makes sense only if the currently reclaimable memory + free pages would allow the current allocation request to succeed (as per __zone_watermark_ok) at least for one zone in the usable zonelist.
This alone wouldn't be sufficient, though, because the writeback might get stuck and reclaimable pages might be pinned for a really long time or even depend on the current allocation context. Therefore there is a backoff mechanism implemented which reduces the reclaim target after each reclaim round without any progress. This means that we should eventually converge to only NR_FREE_PAGES as the target, fail the wmark check, and proceed to OOM. The backoff is simple and linear, with 1/16 of the reclaimable pages for each round without any progress. We are optimistic and reset the counter for successful reclaim rounds.
Costly high-order pages mostly preserve their semantics: those without __GFP_REPEAT fail right away, while those which have the flag set will back off after the amount of reclaimable pages reaches the equivalent of the requested order. The only difference is that if there was no progress during the reclaim, we rely on the zone watermark check. This is a more logical thing to do than the previous 1<<order attempts, which were a result of zone_reclaimable faking the progress.
[[email protected]: check classzone_idx for shrink_zone] [[email protected]: separate the heuristic into should_reclaim_retry] [[email protected]: use zone_page_state_snapshot for NR_FREE_PAGES] [[email protected]: shrink_zones doesn't need to return anything] Signed-off-by: Michal Hocko [email protected] Acked-by: Hillf Danton [email protected] Cc: Vladimir Davydov [email protected] Cc: Johannes Weiner [email protected] Cc: David Rientjes [email protected] Cc: Joonsoo Kim [email protected] Cc: Mel Gorman [email protected] Cc: Tetsuo Handa [email protected] Cc: Vlastimil Babka [email protected] Signed-off-by: Andrew Morton [email protected] Signed-off-by: Linus Torvalds [email protected] Signed-off-by: Reinazhard [email protected] Change-Id: Iaa284d3f686617aae45cb2976e541beb84eda26d
oom: keep mm of the killed task available
oom_reap_task has to call exit_oom_victim in order to make sure that the oom victim will not block the oom killer forever. This is, however, opening new problems (e.g. oom_killer_disable exclusion - see commit 74070542099c ("oom, suspend: fix oom_reaper vs. oom_killer_disable race")). exit_oom_victim should ideally be called only from the victim's context.
One way to achieve this would be to rely on per mm_struct flags. We already have MMF_OOM_REAPED to hide a task from the oom killer since "mm, oom: hide mm which is shared with kthread or global init". The problem is that the exit path:
do_exit
  exit_mm
    tsk->mm = NULL;
    mmput
      __mmput
        exit_oom_victim
doesn't guarantee that exit_oom_victim will get called in a bounded amount of time. At least exit_aio depends on IO which might get blocked due to lack of memory and who knows what else is lurking there.
This patch takes a different approach. We remember tsk->mm in the signal_struct and bind it to the signal_struct lifetime for all oom victims. __oom_reap_task_mm as well as oom_scan_process_thread do not have to rely on find_lock_task_mm anymore and they will have a reliable reference to the mm struct. As a result all the oom-specific communication inside the OOM killer can be done via tsk->signal->oom_mm.
Increasing the signal_struct for something as unlikely as the oom killer is far from ideal, but this approach will make the code much more reasonable, and long term we might even want to move task->mm into the signal_struct anyway. In the next step we might want to make the oom killer exclusion and access to memory reserves completely independent, which would also be nice.
Link: http://lkml.kernel.org/r/[email protected] Signed-off-by: Michal Hocko [email protected] Cc: Tetsuo Handa [email protected] Cc: Oleg Nesterov [email protected] Cc: David Rientjes [email protected] Cc: Vladimir Davydov [email protected] Signed-off-by: Andrew Morton [email protected] Signed-off-by: Linus Torvalds [email protected]
Fixed a galaxy views bug
omg a stupid mistake... minus 4 hours of my life(((
fuck you git add .git add .wooooooogit add .git add .git add .
Explicitly show ally foes
This shows ally targeting info: (i) on the ally, (ii) on the target monsters, and (iii) as a mi flag on the short description (without specifics). For simple cases this won't show anything surprising at all to experienced players, but I think it still is useful info to new players. For complex cases, it may be useful to see e.g. that an ally is pathing through a monster that's not what you'd hope it to be targeting.
I suspect this hasn't been done before because of information overload issues. And monster descriptions definitely have been accruing information overload for the last few versions! However, I think I have converted to team info, and I do think this specific bit of info is useful for many cases. It also may have not been done before because there are cases where what is shown is weird and confusing. However, this reflects weird and confusing ally behaviour, which is still frustrating to players when it is detected. This commit may make it easier to detect, which is perhaps the path towards less confusing ally behavior. I won't be surprised if it turns up some bugs too...
It doesn't seem impossible to me that this could turn out to be a bad idea to show after all, but we'll see.
Slowly and painfully working on the admin panel UI. Having troubles figuring out the best layout. Note to self - use a CMS next time.
chore: no more lodash
All it is used for can now be achieved by vanilla JS.
Good bye, my love! I will remember you forever.
chore: Make IPFS instances DRY way (#979)
- chore: extract common createIPFS function for tests
- chore: do swarm.connect without new variables
- chore: run core tests sequentially
  No time gains
- chore: uncomment fixed test
  Multi-query has a weird behavior regarding timeouts
- chore: proper ceramic instance management on ceramic-api.test.ts
- chore: no more lodash
  All it is used for can now be achieved by vanilla JS.
  Good bye, my love! I will remember you forever.
Reaction rates, pH, purity and more! Brings a heavily improved, less explosive and optimised fermichem to tg. (#56019)
Brings a heavily improved, rewritten, and optimised fermichem to tg. I saw that tg seemed receptive to it, so I thought I’d do it myself. If you know of fermichem – there’s a lot changed and improved, so looking at other documents regarding it will not be accurate.
Revamps the main chemistry reaction handler to allow for over time reactions instead of instant reactions. This revamp allows for simultaneous reactions, exo/endothermic reactions and pH consuming/producing behaviours. Most of the reactions in game will now inherit an easy one size fits all reaction.
Temperature mechanics
Temperature affects reaction rate
The higher it is, the faster it is, but be careful, as chem reactions will perform special functions when overheated (presently it DOESN’T explode)
Temperature will increase or decrease depending on the exo/endothermic nature of the reaction
pH mechanics
Each reaction requires the pH of a beaker to be within a certain range.
If you are outside of the optimal range, you'll incur impurity, which has a negative effect on the resultant chem
pH of a beaker will change during a reaction
Reacting impure chem effects can vary from chem to chem, but by default will reduce the purity of other reagents in the beaker
Consuming an impure chem will cause either liver or tox damage, dependent on how impure it is, as well as reducing the consumed volume
Purity can (presently) only be seen with a chemical analyser
Impure chems can purposely be made by making the reagent with a low, but not explosive, purity.
A chem made under the PurityMin will convert into the reagent’s failed chem in the beaker.
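As a toy illustration of the pH mechanics above (the band edges and penalty rate here are invented for the example, not the game's actual numbers or code):

```python
def purity_penalty(ph, optimal_low=5.0, optimal_high=9.0, rate=0.05):
    """Toy model: no impurity inside the optimal pH band; impurity grows
    linearly with distance outside it, capped at total impurity (1.0)."""
    if optimal_low <= ph <= optimal_high:
        return 0.0
    dist = optimal_low - ph if ph < optimal_low else ph - optimal_high
    return min(rate * dist, 1.0)

assert purity_penalty(7.0) == 0.0   # inside the band: no impurity
assert purity_penalty(3.0) == 0.1   # 2 pH units below the band
```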
Optional catalysts
Reactions can use an optional catalyst to influence the reaction - at the moment, framework exists for temperature, reaction rate, and pH changes as a result of a catalyst. Catalysts can be set to only work on a specific reagent subtype. It is preferable that those building upon this code have optional catalysts only affect a subsection of reagents.
Presently the only catalyst that uses this is Palladium synthate catalyst - a catalyst that increases the reaction speed of medicines.
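A sketch of the subtype-restricted catalyst idea in Python (hypothetical names and numbers - the real system is DM, and only the "medicines only" behaviour is from this PR):

```python
# Hypothetical catalyst model: a catalyst multiplies reaction speed, but only
# for recipes whose product belongs to the subtype it targets.

class Catalyst:
    def __init__(self, name, target_subtype, speed_mult):
        self.name = name
        self.target_subtype = target_subtype
        self.speed_mult = speed_mult

    def modified_rate(self, base_rate, product_subtype):
        # Only boost reactions producing the targeted subtype; leave
        # everything else untouched.
        if product_subtype == self.target_subtype:
            return base_rate * self.speed_mult
        return base_rate

# Illustrative instance modelling the palladium synthate catalyst.
palladium = Catalyst("palladium synthate", "medicine", 2.0)
```

Restricting each catalyst to a subtype keeps catalysts from becoming a universal "go faster" button, which is the design preference stated above.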
Reaction agents
These are reagents that will consume themselves when added to a beaker - even a full one - and apply effects to the total solution. One example is Tempomyocin, which will speed up a reaction; another is the buffer reagents, which change the pH.
Competitive reactions
These reactions will go towards a certain product depending on the conditions of the holder. The example one given is a little tricky and requires a lot of temperature to push it towards one end.
New and changed reactions
(see the wiki for details)
Acidic/basic buffer - These reagents will adjust the pH of a beaker/solution when added to one. If the beaker is empty, it will fill it instead.
Tempomyocin - This will instantly speed up any reaction it is added to, giving it a short burst of speed. Adding this reagent gives the reaction a sudden speed boost of up to 3x, with the purity of the boosted output modified by the Tempomyocin's purity. 5u per 100u will give you 2x, 10u per 100u will give you 3x. It caps at 3x for a single addition, but there is nothing preventing you from adding multiple doses for multiple boosts.
Purity tester - this will fizzle if the solution it is added to has an inverse-purity reagent present.
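The Tempomyocin numbers above (5u per 100u gives 2x, 10u per 100u gives 3x, capped at 3x per dose) fit a simple linear rule, and the buffers can be approximated as volume-weighted pH mixing. A Python sketch - the linear fit and the mixing model are my assumptions for illustration, not the PR's actual code:

```python
def tempomyocin_boost(dose, solution_volume, cap=3.0):
    """Speed multiplier from one Tempomyocin addition.

    Linear fit through the stated points: 5u/100u -> 2x, 10u/100u -> 3x,
    capped at `cap` per dose (multiple doses can stack separate boosts).
    """
    return min(cap, 1.0 + 20.0 * (dose / solution_volume))

def buffered_ph(beaker_ph, beaker_vol, buffer_ph, buffer_vol):
    """Volume-weighted pH after adding an acidic/basic buffer.

    Simplified model: real pH is logarithmic, but a weighted average
    captures the 'nudge the beaker towards the buffer's pH' behaviour.
    """
    total = beaker_vol + buffer_vol
    return (beaker_ph * beaker_vol + buffer_ph * buffer_vol) / total
```

For example, adding 20u of an acidic buffer (pH 2) to 80u of a pH 7 solution lands the mix at pH 6 under this model.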
A few other reactions have been tweaked to make sure they work too - an example being meth; see the wiki page linked above.
A note on all reactions
The one-size-fits-all reaction for all chems generally won’t create impure chems – it is very forgiving. The only things to remember are to avoid heating reactions over 900 or you’ll reduce your yield, and to try to keep your pH between 5 and 9.
This PR doesn’t have specific example chems included (except for the buffers) – they will be atomised out, and they use the mechanics in more depth.
A note on plumbing
I reached out to Time Green and we worked together to make sure plumbing was fine. Time Green did some of his own tests too, and surprisingly it doesn't look like much needs to be changed.
chore(build): Switch to cmake as a configuration system (#125)
This PR switches away from raw makefiles as a configuration and build system and uses cmake instead.
Why?
We're about to enter territory that would be remarkably painful to tread upon with make: for upcoming module projects, we'll compile both for the target (probably with gcc) and the host (probably with clang); we'll also use other tools like clang sanitizers. We'll build in testing, which means - for native code projects - compiling different executables. All of this is hard to do in make, and the ways that you work around it typically mean abandoning the places where Make can help you the most.
Raw makefiles are pretty good at replacing shell scripts as a build system. They give you dependency management, and they give you some built-ins for managing compiler options. The farther you go from using them as a replacement for shell scripts, the worse they get. In particular, they really start to drag when the configuration of a build becomes more complex, and especially when there are multiple complex configurations. The typical way to avoid this in raw Make is to stop using Make's built-in rules (so you don't have to, e.g., switch around $CC) or to have increasingly complex switches around environment variables.
The other route that people take is to use a config system. There are a lot of these: kconfig; buildroot's config, which somehow also runs under make in bizarre ways; IDE project configuration (which therefore ties you to an IDE); and others.
But there's also cmake. CMake is an ugly configuration system to look at (just check out the code!), and its overeager attempts to help you out can occasionally get in your way, but it is fundamentally a solid, well-designed, extremely well-documented, opinionated, multi-platform native code configuration system that is widely used, integrates well with IDEs, and fixes the issues I noted above.
OK, but why change what works?
Well, unfortunately, cmake's core desire is to compute a bunch of configuration and emit... Makefiles. Which means it's very hard to use alongside a different Makefile-based system. Also, I needed the practice, quite honestly.
So what changed?
opentrons-modules in this PR uses cmake as a configuration system, generating (currently) makefiles.
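For illustration, a hypothetical top-level CMakeLists.txt under this kind of layout (the subdirectory name is made up, not the repo's actual tree):

```cmake
# Hypothetical top-level CMakeLists.txt. Each add_subdirectory() hands
# control to that directory's own CMakeLists.txt, which is responsible
# for building everything below it, mirrored into the builds/ directory.
cmake_minimum_required(VERSION 3.19)
project(opentrons-modules)

add_subdirectory(example-module)  # illustrative name; picks up example-module/CMakeLists.txt
```

This is the structural shift: configuration logic lives next to the code it builds, rather than in one monolithic top-level Makefile.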
That means that the logic that used to be in the top-level Makefile is now spread out across the CMakeLists.txt files that exist in the source tree. Each file is responsible for building what is below it, in the parallel builds directory.
How do I use this?
(Check out the README)
The biggest usability downside of cmake is that it has two steps: configure and build. You can do them both with the cmake command, which is your friend. The specific version you want is CMake 3.19, which you can find here. We want 3.19, the latest release, because of the new "presets" concept, which allows much easier (MUCH easier) configuration of things like cross-compilers.
So first you configure: cmake -B ./builds . (or, with a preset, cmake --preset <name>). And then you build: cmake --build ./builds
You can also use the generated Makefiles to build - cd builds && make.
To bundle up all the modules into a specific package, run cmake --install ./builds.
When you run the configuration step, cmake will automatically download the Arduino IDE and configure it for you.