
2022-05-23

Of the 1,733,664 events recorded by gharchive.org, 1,733,664 were push events containing 2,797,885 commit messages amounting to 205,125,452 characters, filtered with words.py@e23d022007... down to these 45 messages:

Monday 2022-05-23 00:25:38 by 1212-5858

C16-93715 - 1516523 from 93715 ? ##1 and for reasonable CAUSE -

just like Shari right ---- the trouble maker who owes $500 million in back taxes on 6 properties...

look, your 100 billion doesn't cover it.

  • You fight fair, and NO INSIDER INFORMATION - or I'll knock you off the BLOCK.

capiche?

  • swaps allowed, futures also.

/s/ BD.

gtg.

you changed your CRD in the new filer BTW... let the regulators know to switch it BACK to the 8209

Branch: refs/heads/REQUEST-TO-BAR

Home: https://github.com/BSCPGROUPHOLDINGSLLC/ELSER-AND-DICKER

Commit: 80bec9c83359c85563c8847bfef867c4cdb180cb

https://github.com/BSCPGROUPHOLDINGSLLC/ELSER-AND-DICKER/commit/80bec9c83359c85563c8847bfef867c4cdb180cb

Author: 1212-5858 [email protected]

Date: 2022-05-22 (Sun, 22 May 2022)

Changed paths:

A 1516523-100-C16-93715

Log Message:


C16-93715 1516523 [ SFITX STFGX STFBX SFBDX ] #1

ATTN: STATE FARM

ONE STATE FARM PLAZA, BLOOMINGTON, IL, 61710

https://github.com/users/BSCPGROUPHOLDINGSLLC/projects/1

** YOU OWN THE TAX LIABILITIES ON THOSE PROPERTIES --- ALL 6 OF THEM

** YOU ALSO TOOK ON THE LIABILITIES OF THE "LEASES AND RENTS" TRANSFERRED?? OR USED AS A GUARANTEE FOR A LETTER OF CREDIT.

"https://iapps.courts.state.ny.us/nyscef/ViewDocument?docIndex=fXMaXgeyzvA85ViWMmvfAQ=="

YOU UNDERSTAND WHAT YOU'RE PLAYING WITH - OR DID I MARK YOUR INTEREST EARLIER AS WELL..

LET me know if your counselors have an opinion on this as well.

[ C16 - DEALINGS -- UNFAIR DEALINGS AND INSIDER INFORMATION]

ATTACHED.

#GOCARDS

https://github.com/users/BSCPGROUPHOLDINGSLLC/projects/1#column-18309490

^ tax receipts

https://www.sec.gov/comments/s7-14-18/s71418-4531826-176079.pdf

VIOLATION OF PRIVACY

https://github.com/BSCPGROUPHOLDINGSLLC/ELSER-AND-DICKER/pull/7

HOW MANY FIRMS DOES IT TAKE TO LOSE TWO BILLION DOLLARS OF MARKET CAP - AND HIDE IT FROM THE INVESTORS...

https://github.com/BSCPGROUPHOLDINGSLLC/ELSER-AND-DICKER/commit/3062fcf9c989174cbc76ea0b7d8135cf32eef8f4

THAT'S TWO BILLION USD.

NOT INCLUDING THE 9 BILLION IN ASSETS UNDER MANAGEMENT LOST, AND NO LONGER UNDER FILER 93715.

  • PRIOR TO LITIGATION AND REGULATORY FINES???

https://github.com/BSCPGROUPHOLDINGSLLC/ELSER-AND-DICKER

WARNED THEM IN JUNE OF 2020 - THAT THERE IS ABOUT $9 BILLION AT RISK.

https://iapps.courts.state.ny.us/nyscef/ViewDocument?docIndex=TxAa7cNVIHKtnJU/ni/zvg==

https://saaze2311prdsra.blob.core.windows.net/clean/61f910a979d5ec11a7b5000d3a1af965/2020-06-03%20Notice%20and%20Obstruction.png

TRIED TO HAVE STATE FARM RESPOND, IN A CHANGE OF CAPTION, AND NOTIFIED ALL RELEVANT PARTIES ON THE 8TH OF AUGUST IN 2020.

https://iapps.courts.state.ny.us/nyscef/ViewDocument?docIndex=Xjn0/e1NcBADqRc_PLUS_g11P4g==

FILED A TCR REPORT WITH THE SECURITIES AND EXCHANGE COMMISSION ON NOVEMBER 13TH, 2021

[ _TCRReport (1).pdf ]

NOTIFIED THE PROMOTERS OF STATE FARM ON NOVEMBER 16TH, 2021

HOW MANY PEOPLE DOES IT TAKE TO DO THE OPPOSITE?

https://github.com/BSCPGROUPHOLDINGSLLC/ELSER-AND-DICKER/commit/d7daa60e1e93abf4098d905770c945a2957c46c8

June 3, 2020

https://saaze2311prdsra.blob.core.windows.net/clean/61f910a979d5ec11a7b5000d3a1af965/2020-06-03%20Notice%20and%20Obstruction.png

February 20, 2022

Your fax (ID: #30666994) to IRS CRIMINAL INVESTIGATIONS at 267-466-1115

" has been delivered successfully at 11:44 PM Eastern Daylight Time"

https://faxzero.com/status/30666994/5790f17018611119e07814be9e36110d164afaa6

per the holding report filed by Terrence Ludwig, Paul J Smith and other compliance officers and Directors of:

State Farm VP Management Corp.,

I am certain those securities reported in the semi-annual holding reports were traded by Morgan Stanley without any disclosure to other market participants, which I understand as:

INSIDER TRADING AND UNFAIR DEALINGS.

WHEN I am CERTAIN

I do NOTIFY parties which in part is a failure on the PART OF THE CURRENT "PROMOTERS"

  1. to make these undisclosed / unregistered securities public and make them available to the General PUBLIC for consideration prior to making an investment.
  • and also in the Central Registration Depository by the Representatives at State Farm VP. FIRM 43036...

SO HOW EXACTLY IS IT THAT THESE ARE MORAL DECISIONS BY THE "PROMOTERS" of a $10 BILLION DOLLAR "State Farm Associates' Funds Trust"

COMPLIANCE OFFICER & TREASURER ALSO CERTIFIED UNDER THESE DOCTRINES HERE under the Sarbanes-Oxley and the Securities and Exchange Act of 1940, HAVE NOT DEALT FAIRLY WITH PRICE WATERHOUSE, IN THEIR CONCERTED EFFORTS OF OBSTRUCTION AND UNFAIR DEALINGS… HAVE CREATED AN EVEN LARGER PROBLEM WHICH I ALSO TRIED TO MITIGATE IN THEIR OMISSIONS.

WITH THE HELP OF THE ZUCKER FAMILY AND ITS COUNSELORS, THESE LOSSES WILL CONTINUE AND MORE SPECIFICALLY IN THE “EVASIVE” STRATEGIES EMPLOYED BY THE COUNSELORS AND ADVISORS OF THE FORMER AND CURRENT STATE FARM FUNDS HEADQUARTER, MANAGED AT:

ONE STATE FARM PLAZA, BLOOMINGTON, IL, 61710

TO CEASE AND DESIST.

GTG/

  1. I REQUESTED AN ESTOPPEL: STATE FARM

https://iapps.courts.state.ny.us/nyscef/ViewDocument?docIndex=s5WAeCnxmd/hcOI4eTnbig==

  2. I REQUESTED AN ESTOPPEL: THE ZUCKER FAMILY & ITS COUNSELORS.

https://iapps.courts.state.ny.us/nyscef/ViewDocument?docIndex=Jf3Un/JaVXZwF7kvbaee4w==

It's all about who you have working in your best interests - you know what that is right Jamoe???

  • Your traders have to cheat AND pay extra to keep an eye on me, all 25,000 of them - but guess what.

--- I can carry them too... ALL OF THEM

https://github.com/BSCPGROUPHOLDINGSLLC/ELSER-AND-DICKER/commit/d7daa60e1e93abf4098d905770c945a2957c46c8

  • HEY IF YOU ARE LOOKING FOR AN EARLY RETIREMENT, JUST SAY THE WORD BUDDY.. PLUS, THAT NAME HASN'T CHANGED IN A WHILE HAS IT...

Monday 2022-05-23 00:36:30 by Farie82

Makes setting a machine GC properly if not unset properly (#17840)

  • Makes setting a machine GC properly if not unset properly

  • Forgot one. Fuck you borer code


Monday 2022-05-23 01:01:10 by Yuki Okushi

Rollup merge of #97144 - samziz:patch-1, r=Dylan-DPC

Fix rusty grammar in std::error::Reporter docs

Commit

I initially saw "print's" instead of "prints" at the start of the doc comment for std::error::Reporter, while reading the docs for that type. Then I figured 'probably more where that came from', so, as well as correcting the foregoing to "prints", I've patched up these three minor solecisms (well, two types, three tokens):

  • One use of the indicative which should be subjunctive - indeed the sentence immediately following it, which mirrors its structure, does use the subjunctive (L871). Replaced with the subjunctive.
  • Two separate clauses joined with commas (L975, L1023). Replaced the first with a semicolon and the second with a period. Admittedly those judgements are pretty much 100% subjective, based on my sense of how the sentences flowed into each other (though ofc the replacement of the comma itself is not subjective or opinion-based).

I know this is silly and finicky, but I hope it helps tidy up the docs a bit for future readers!

PR notes

This is very much non-urgent (and, honestly, non-important). I just figured it might be a nice quality-of-life improvement and bit of tidying up for the core contributors themselves not to have to do. 🙂

I'm tagging Steve, per the contributing guidelines ("Steve usually reviews documentation changes. So if you were to make a documentation change, add r? @steveklabnik"):

r? @steveklabnik


Monday 2022-05-23 01:15:32 by Trent W. Buck

dvdrip: remove needless timestamp

16:13 self.dvd_title = f'{self.dvd_title or "Unknown"} {datetime.datetime.today()}'
16:13 That should be date.today(), but I want to remove it completely
16:13 the tvserver adds another timestamp, so I don't think it's needed
16:14 REDACTED: any opinion?
16:14 Hmmm, I think the timestamp should be added as soon as possible to avoid 2 rip jobs trampling on each other
16:14 REDACTED: it's already using temp dirs.
16:15 It just means that if two staff have the same disc, and try to rip both discs on the same day, the second one will go "oh, it already exists in the .ripped queue" and (now) clean up after itself
16:16 If we're reliably handling errors, then yeah fine, it can go, I think. Because realistically they'll be the same regardless of date ripped. But make sure you're not overwriting things without checking
16:16 Whereas what happens at AMC is they end up re-ripping the same dvd 11 fucking times
16:16 Ok, cleanup itself is the important part
16:16 Yeah
16:16 Anyway the desktop side of this I'll check today incidentally as part of this


Monday 2022-05-23 02:15:10 by chris

Blades of mercy, loosely based on Bloodborne blades of mercy + shaman bone blades.

Apply to separate into a blade of grace and a blade of pity.

  • Blades of mercy can use short sword or axe skill.
  • Blades of grace can use short sword, axe, or dagger skill.
  • Blades of pity use dagger skill.

Physical damage has .5x Str, 1x Dex, and .5x Int scaling.

Insight weapon

  • Once the veil is pierced, starts gaining extra magic damage for insight: +2% per point, capping at +100%.
  • At 25 insight, also gains bonus damage from Cha (2% per point).
      - Total bonus damage is therefore 2.5x vs. non-magic-resistant monsters.
  • Also at 25 insight, lowers target to-hit and damage.
  • At 50 insight, causes the target to attack your other enemies.
      - If the target lives through the blow, it gets a resistance check to avoid making the extra attack.
      - If it dies, it makes the attack before dying.
      - Attacker gets bonus to hit and damage based on your insight and Cha.
      - If the target lived through the blow and killed an ally, it may go insane (gets a resist roll to avoid).
      - Also works vs. you, and you will attack pets, and your god will get angry if you kill them.
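A small C sketch of the insight bonus arithmetic as described above (hedged: the function and variable names are invented, not the variant's actual code): +2% magic damage per point of insight capped at +100%, plus +2% per point of Cha once insight reaches 25, which at cap works out to the stated 2.5x against non-magic-resistant monsters.

/* Sketch of the described scaling; names are illustrative only. */
static int
insight_damage_percent(int insight, int cha)
{
    int bonus = 2 * insight;       /* +2% per point of insight      */
    if (bonus > 100)
        bonus = 100;               /* caps at +100%                 */
    if (insight >= 25)
        bonus += 2 * cha;          /* +2% per point of Cha          */
    return 100 + bonus;            /* percent of base weapon damage */
}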

All bloodborne inspired weapons can reach +10, including artifacts.

This is the "generic madwoman" weapon; it's why they get the seemingly-useless dagger and two-weapon skill when opening their box, vs. the "generic madman"'s (rakuyo) saber skill.

Deminymphs may get this or a rakuyo, forming a "Hunter" kit. They may now spawn with the blades joined or separated.

Player monsters updated as well. Also, non-astral madmen get axes instead of stilettos.

Fixes a bug where negative encouragement values weren't shown with stethoscopes.

Xhity uses a boolean to trigger the mercy effect


Monday 2022-05-23 02:17:42 by PIZZA

Add files via upload

i'm tired. i really can't do it anymore. can't believe that i started this project up in 2020

i ain't working on this piece of crap any longer. ace now has full ownership of the project, both original and legacy. of course i'll still work on the model to clean up faces and shit, but i think ace would be much better at getting a team to help out - that way i don't have to work on it on my own; his team would be able to help out. so yeah, i'm giving up ownership to ace. bye. it's literally 9:16 pm

i'll release kyle's model as soon as part 2 even officially comes out


Monday 2022-05-23 04:08:15 by bors

Auto merge of #12294 - listochkin:prettier, r=Veykril

Switch to Prettier for TypeScript Code formatting

Summary of changes:

  1. Added .editorconfig file to dictate general hygienic stuff like character encoding, no trailing whitespace, new line symbols etc. for all files (e.g. Markdown). Install an editor plugin to get this rudimentary formatting assistance automatically. Prettier can read this file and, for example, use it for indentation style and size.
  2. Added a minimal prettier config file. All options are default except line width, which per Veykril's suggestion is set to 100 instead of 80, because that's what Rustfmt uses.
  3. Change package.json to use Prettier instead of tsfmt for code formatting.
  4. Performed initial formatting in a separate commit; per bjorn3's suggestion, added its hash to a .git-blame-ignore-revs file. For it to work you need to add a configuration to your git installation:
    git config --global blame.ignoreRevsFile .git-blame-ignore-revs
  5. Finally, removed typescript-formatter from the list of dependencies.

What follows below is summary of the discussion we had on Zulip about the formatter switch:

Background

For the context, there are three reasons why we went with tsfmt originally:

  • stick to vscode default/built-in
  • don't add extra deps to package.json.lock
  • follow upstream (language server node I think still uses tsfmt)

And the meta reason here was that we didn't have anyone familiar with frontend, so went for the simplest option, at the expense of features and convenience.

Meanwhile, Prettier became the formatter project that the JS community consolidated on a few years ago. It's similar to go fmt / cargo fmt in spirit: minimal to no configuration, to promote general uniformity in the ecosystem. There are some options that were needed early on to make sure the project gained momentum, but it is by no means a customizable formatter that can easily be adjusted to reduce the number of changes needed for adoption.

Overview of changes performed by Prettier

Some of the changes are acceptable. Prettier dictates a unified string quoting style, and as a result half of our imports at the top are changed. No one would mind that. Some one-line changes are due to string quotes too, and although these are numerous, the surrounding lines aren't changed, and git blame / GitLens will still show relevant context.

Some are toss-ups. The trailingComma option: set it to none and get a bunch of meaningless changes in half of the code; set it to all and get a bunch of changes in the other half. Same with using parentheses around single parameters in arrow functions: x => x + 1 vs (x) => x + 1. Prettier forces one style or the other, but we use both in our code.

Like I said, the changes above are OK - they take a single line and don't disrupt GitLens / git blame much. The big one is line width. Prettier wants you to choose one and stick to it. The default is 80, and it forces some reformatting to squish deeply nested code or long function type declarations. If I set it to 100-120, then Prettier finds other parts of code where a multi-line expression can be smashed into a single long line. The problem is that in both cases some of the lines that get changed are interesting: they contain somewhat non-trivial logic, and if I were to work on them in future I would love to see the commit annotations that tell me something relevant. Alas, we lose some of that.

Project impact

Though Prettier is a mainstream JS project, it has no dependencies. We add another package so that it and ESLint work together nicely, and that's it.


Monday 2022-05-23 04:22:24 by Jamie D

Adds APC and different areas for the multiple air alarms.. why could you siphon interrogation from perma.. (#14163)

  • Update Space_Station_13_areas.dm

  • Fixes Brig to not be Shit

  • Fixes Areastring

  • other maps

  • Update code/game/area/Space_Station_13_areas.dm

  • Fucking hate baiomu so much

  • fucking apc


Monday 2022-05-23 04:22:24 by TheRyeGuyWhoWillNowDie

Makes bloodbrothers start with the makeshift weapons book learned. (Jamie Edition) (#14094)

  • makes blood brothers a bit less shit

  • oopsie

  • improve???

  • what

  • huh??


Monday 2022-05-23 05:06:17 by Kayla

fix the segmentation fault thank you tuna (still, fuck you)


Monday 2022-05-23 06:02:06 by Tom Lane

Fix rowcount estimate for SubqueryScan that's under a Gather.

SubqueryScan was always getting labeled with a rowcount estimate appropriate for non-parallel cases. However, nodes that are underneath a Gather should be treated as processing only one worker's share of the rows, whether the particular node is explicitly parallel-aware or not. Most non-scan-level node types get this right automatically because they base their rowcount estimate on that of their input sub-Path(s). But SubqueryScan didn't do that, instead using the whole-relation rowcount estimate as if it were a non-parallel-aware scan node. If there is a parallel-aware node below the SubqueryScan, this is wrong, and it results in inflating the cost estimates for nodes above the SubqueryScan, which can cause us to not choose a parallel plan, or choose a silly one --- as indeed is visible in the one regression test whose results change with this patch. (Although that plan tree appears to contain no SubqueryScans, there were some in it before setrefs.c deleted them.)

To fix, use path->subpath->rows not baserel->tuples as the number of input tuples we'll process. This requires estimating the quals' selectivity afresh, which is slightly annoying; but it shouldn't really add much cost thanks to the caching done in RestrictInfo.
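A toy sketch in C of the arithmetic described above (hedged: this models the description, it is not the PostgreSQL patch itself, and the names are invented). Under a Gather, the subpath's row estimate already reflects one worker's share, so scaling it by the quals' selectivity yields a per-worker estimate instead of the whole-relation tuple count used before:

#include <math.h>

/* Toy model of the estimate change -- not PostgreSQL source. */
static double subquery_scan_rows(double subpath_rows, double qual_selectivity)
{
    double rows = subpath_rows * qual_selectivity;
    return rows < 1.0 ? 1.0 : rint(rows);   /* mimic clamp_row_est() */
}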

This is pretty clearly a bug fix, but I'll refrain from back-patching as people might not appreciate plan choices changing in stable branches. The fact that it took us this long to identify the bug suggests that it's not a major problem.

Per report from bucoo, though this is not his proposed patch.

Discussion: https://postgr.es/m/[email protected]


Monday 2022-05-23 07:05:10 by Peter Zijlstra

sched/core: Fix ttwu() race

Paul reported rcutorture occasionally hitting a NULL deref:

sched_ttwu_pending()
  ttwu_do_wakeup()
    check_preempt_curr() := check_preempt_wakeup()
      find_matching_se()
        is_same_group()
          if (se->cfs_rq == pse->cfs_rq) <-- BOOM

Debugging showed that this only appears to happen when we take the new code-path from commit:

2ebb17717550 ("sched/core: Offload wakee task activation if it the wakee is descheduling")

and only when @cpu == smp_processor_id(). Something which should not be possible, because p->on_cpu can only be true for remote tasks. Similarly, without the new code-path from commit:

c6e7bd7afaeb ("sched/core: Optimize ttwu() spinning on p->on_cpu")

this would've unconditionally hit:

smp_cond_load_acquire(&p->on_cpu, !VAL);

and if: 'cpu == smp_processor_id() && p->on_cpu' is possible, this would result in an instant live-lock (with IRQs disabled), something that hasn't been reported.

The NULL deref can be explained however if the task_cpu(p) load at the beginning of try_to_wake_up() returns an old value, and this old value happens to be smp_processor_id(). Further assume that the p->on_cpu load accurately returns 1: it really is still running, just not here.

Then, when we enqueue the task locally, we can crash in exactly the observed manner because p->se.cfs_rq != rq->cfs_rq, because p's cfs_rq is from the wrong CPU, therefore we'll iterate into the non-existent parents and NULL deref.

The closest semi-plausible scenario I've managed to contrive is somewhat elaborate (then again, actual reproduction takes many CPU hours of rcutorture, so it can't be anything obvious):

				X->cpu = 1
				rq(1)->curr = X

CPU0				CPU1				CPU2

				// switch away from X
				LOCK rq(1)->lock
				smp_mb__after_spinlock
				dequeue_task(X)
				  X->on_rq = 0
				switch_to(Z)
				  X->on_cpu = 0
				UNLOCK rq(1)->lock

								// migrate X to cpu 0
								LOCK rq(1)->lock
								dequeue_task(X)
								set_task_cpu(X, 0)
								  X->cpu = 0
								UNLOCK rq(1)->lock

								LOCK rq(0)->lock
								enqueue_task(X)
								  X->on_rq = 1
								UNLOCK rq(0)->lock

// switch to X
LOCK rq(0)->lock
smp_mb__after_spinlock
switch_to(X)
  X->on_cpu = 1
UNLOCK rq(0)->lock

// X goes sleep
X->state = TASK_UNINTERRUPTIBLE
smp_mb();			// wake X
				ttwu()
				  LOCK X->pi_lock
				  smp_mb__after_spinlock

				  if (p->state)

				  cpu = X->cpu; // =? 1

				  smp_rmb()

// X calls schedule()
LOCK rq(0)->lock
smp_mb__after_spinlock
dequeue_task(X)
  X->on_rq = 0

				  if (p->on_rq)

				  smp_rmb();

				  if (p->on_cpu && ttwu_queue_wakelist(..)) [*]

				  smp_cond_load_acquire(&p->on_cpu, !VAL)

				  cpu = select_task_rq(X, X->wake_cpu, ...)
				  if (X->cpu != cpu)
switch_to(Y)
  X->on_cpu = 0
UNLOCK rq(0)->lock

However I'm having trouble convincing myself that's actually possible on x86_64 -- after all, every LOCK implies an smp_mb() there, so if ttwu observes ->state != RUNNING, it must also observe ->cpu != 1.

(Most of the previous ttwu() races were found on very large PowerPC)

Nevertheless, this fully explains the observed failure case.

Fix it by ordering the task_cpu(p) load after the p->on_cpu load, which is easy since nothing actually uses @cpu before this.
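A minimal self-contained sketch of the ordering fix, using C11 atomics as a stand-in for the kernel's smp_rmb()/smp_cond_load_acquire() primitives (the struct and function names here are invented for illustration): the point is only that the ->cpu load must not be able to happen before the ->on_cpu load.

#include <stdatomic.h>

struct task { _Atomic int on_cpu; _Atomic int cpu; };

/* Toy model: wait for on_cpu to drop with acquire ordering, so the
 * subsequent ->cpu load cannot be satisfied by a stale value read
 * before the task actually left its old CPU. */
static int stable_task_cpu(struct task *p)
{
    while (atomic_load_explicit(&p->on_cpu, memory_order_acquire))
        ;   /* spin until the task is fully off its previous CPU */
    return atomic_load_explicit(&p->cpu, memory_order_relaxed);
}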

Fixes: c6e7bd7afaeb ("sched/core: Optimize ttwu() spinning on p->on_cpu") Reported-by: Paul E. McKenney [email protected] Tested-by: Paul E. McKenney [email protected] Signed-off-by: Peter Zijlstra (Intel) [email protected] Signed-off-by: Ingo Molnar [email protected] Link: https://lkml.kernel.org/r/[email protected]


Monday 2022-05-23 07:57:47 by Tyler

New Binding System (Big Effort)

Three weeks of nights and weekends, and it has finally come together. I'm sure there's still some dragons and pain points in this code, but the architecture is now where I want it for production use.

A big change was making renderers that use only a subset of the skeleton work correctly without uploading the skeleton multiple times. Originally, my plan was to hash bindposes and do incremental updates. But after seeing how ugly that got, I started experimenting with moving the bindposes to the GPU. That paid off! It also unified the pipeline between compute skinning and what will be LBS vertex skinning. The compute shader handles setting up both. This does come with the cost of limiting the number of bones and transforms in a hierarchy. We'll see if that matters.

RenderDoc was a hero during debugging. Being able to inspect the buffers and find out I wasn't uploading vertices to the ranges I expected, as well as catch other stupid bugs - I don't think I would have gotten this working without it.

The optimized code path is working again. I tested imports pre-posed and it seems to handle that too. Of course the real highlight was throwing 100,000 dancers at it now that VRAM isn't an issue. 24 FPS! I bet someone with a better computer than me can make it hit 60 FPS.

Now it is just vertex LBS skinning, a custom allocator for animation buffers, and the animation clip plugin. Then beta while I write docs, and finally a release!

I'll announce the next alpha after vertex LBS skinning is in place.


Monday 2022-05-23 08:51:48 by Tim

Change healing by sleeping to be affected by sanity, darkness (or blindfold), and earmuffs. (#65713)

About The Pull Request

Depending on the mob's sanity level, it can have a positive or negative boost to healing effects while sleeping. Sleeping in darkness, wearing a blindfold, and using earmuffs also count as a healing bonus. Beauty sleep is very important for 2D spessmen. Why It's Good For The Game

This is a small gameplay change that rewards players for keeping their sanity at good levels. Depression has also been linked with impaired wound healing in real life. The placebo effect on people's minds is strenuously documented, and I think it would be cool to see it in the game. Changelog

cl expansion: Healing by sleeping is now affected by sanity, sleeping in darkness (or using a blindfold), and using earmuffs. The healing from sleeping in a bed was slightly decreased. /cl


Monday 2022-05-23 10:02:16 by ElenoreGill

added properties

this is hell, nothing works, i have no time left and i want to cry. why won't my bloody image be well behaved and just go where i want it to? why does it have to be so quirky?!?!? i hate css, it sucks


Monday 2022-05-23 10:22:03 by treckstar

People listen up don't stand so close, I got somethin that you all should know. Holy matrimony is not for me, I'd rather die alone in misery.


Monday 2022-05-23 10:36:14 by Marko Grdinić

"9:30am. Am just chilling. Let me do it for a while longer and then I will start. Rosen Garten is out.

10:15am. Let me start. First things first, let me check out the Moi thread.

https://moi3d.com/forum/discussion.php?webtag=MOI&msg=10705.1

10:25am. From the way I've been using the pen I am starting to feel some hand muscle issues. I need to relax it as much as possible. And try to use the elbow and shoulder more. Maybe it would not be bad to take a day off from this.

Nevermind that. I am only a day from being done. Let me investigate those boxes.

10:40am. Let me try opening the gaps myself as an exercise. After that I'll get started on modeling the rest of the items.

///

10705.6 In reply to 10705.1 Hi Markog, in the attached version I've opened up that little bit of space in between the flaps by copying edge lines and using Edit > Trim and also cutting out the margin in the corners as well.

With this setup you should now be able to shell the whole thing ok, set Shell thickness = 0.05 and set Direction: Flip.

The spacing helps to keep the flaps from running in to each other.

///

I am not really sure how it would be possible to do this efficiently using trim. But drawing rectangles and using boolean diff works quite well. Let me pause this for a bit. I want to check out the usb just a tad. I remembered an idea from last night.

11:10am. It feels like too much of my time is lost due to posting stuff. At any rate I've tried out my idea. I misremembered - 3 pt arcs do not have the elliptical feature. They are already naturally that way.

But conics are really good, they are exactly what I need for what I had in mind. Let me rebuild the body of the USB...no forget it. I put a chamfer on the rest. It is not worth it for me to mess with this.

11:20am. Let me deal with the other box. Focus me. Actually let me repeat what I've done on the first box. The gap needs to be a bit wider.

11:35am. Ah damn, sliding along a rectangle can sometimes not do a clean job.

12pm. Shit, now I am losing time on this. Why is the gap not wide enough? Doing this kind of thing in Zbrush would allow me to avoid these kinds of issues. I should just leave the box for the later stage instead of bashing my head with Moi.

12:20pm. I finally did the box. It took me far too long to realize that Flip means the flip normal direction. Anyway, what is next?

12:30pm. Ah shit, I just realized I could have done those stripes on the router using inset. What I did was shell a copy of it, take the intersections, and then diff it with the original, which is a lot more long-winded. Instead I should have used trim to cut in the new faces and inset those instead.

12:35pm. I am distracted. I need to get started. But actually this is a good place to stop for the morning. Let me have breakfast here."


Monday 2022-05-23 10:51:04 by Mirek Kratochvil

avoid using @sync_add on remotecalls (#44671)

  • avoid using @sync_add on remotecalls

It seems like @sync_add adds the Futures to a queue (Channel) for @sync, which in turn calls wait() for all the futures synchronously. Not only is that slightly detrimental for network operations (latencies add up), but in the case of Distributed the call to wait() may actually cause some compilation on remote processes, which is also wait()ed for. As a result, some operations took a great amount of "serial" processing time if executed on many workers at once.

For me, this closes #44645.

The major change can be illustrated as follows: First add some workers:

using Distributed
addprocs(10)

and then trigger something that, for example, causes package imports on the workers:

using SomeTinyPackage

In my case (importing UnicodePlots on 10 workers), this improves the loading time over 10 workers from ~11s to ~5.5s.

This is a far bigger issue when the worker count gets high. The processing time on each worker is usually around 0.3s, so triggering this problem even on a relatively small cluster (64 workers) causes a really annoying delay, and running @everywhere for the first time on reasonable clusters (I tested with 1024 workers, see #44645) usually takes more than 5 minutes. Which sucks.

Anyway, on 64 workers this reduces the "first import" time from ~30s to ~6s, and on 1024 workers this seems to reduce the time from over 5 minutes (I didn't bother to measure that precisely now, sorry) to ~11s.

Related issues:

  • Probably fixes #39291.
  • #42156 is kinda complementary -- it removes the most painful source of slowness (the 0.3s precompilation on the workers), but the fact that the wait()ing is serial remains a problem if the network latencies are high.

May help with #38931

Co-authored-by: Valentin Churavy [email protected] (cherry picked from commit 62e0729dbc5f9d5d93d14dcd49457f02a0c6d3a7)


Monday 2022-05-23 11:06:21 by L1F20ASCS0028

no class. stupid as fuck regulations and more idiotic guards


Monday 2022-05-23 12:02:03 by 1212-5858

93715 --- Nov 13 and Dec 18, 2021 - could have saved about 1.5 billion dollars by switching to BBO.

ATTN: MR. MOORE & CO. ONE STATE FARM PLAZA, BLOOMINGTON, IL, 61710

I REQUESTED AN ESTOPPEL --- --- TO CEASE AND DESIST FROM ALL THE ACTIVITY

addr.: STATE FARM

https://iapps.courts.state.ny.us/nyscef/ViewDocument?docIndex=s5WAeCnxmd/hcOI4eTnbig==

addr.: THE ZUCKER FAMILY & ITS COUNSELORS [ all of them ]

https://iapps.courts.state.ny.us/nyscef/ViewDocument?docIndex=Jf3Un/JaVXZwF7kvbaee4w==

NOTWITHSTANDING FOOLING AROUND WITH A '40 ACT MUTUAL FUND. IF YOU ARE LOOKING FOR AN EARLY RETIREMENT, IT DOESN'T HAVE TO BE THIS WAY.

PLUS, THE NAME HASN'T CHANGED THIS YEAR YET - AND IT DOESN'T HAVE TO....

TWENTY-FIVE MILLION US DOLLARS, UPFRONT

AND CONSIDER THE MATTER CLOSED, WITH THE IMPLIED "COVERAGE" THEREAFTER.

WITHOUT ANY LEGAL EXPOSURES, AND WITH MYSELF THERE TO ASSIST YOU PERSONALLY - UNTIL THE DISASTER YOUR ADVISORS HAVE LEFT YOU WITH IS CLEAR.

UPFRONT – THAT IS MY COST, UNLESS YOU HAVE ANOTHER PROFESSIONAL TO DEAL WITH YOUR COMPANY'S AFFAIRS LEGITIMATELY AFTER THE FACT... AND NOT THE KIND OF PROFESSIONAL THE ZUCKER WOULD HIRE TO WATCH ME IN A HOTEL ROOM AT 5AM - JUST LIKE THE ATTACHED FILE.

A BETTER USE OF YOUR CREDIT LINE ANYWAYS, FOR ALL INTENTS AND PURPOSES.

TIME SENSITIVE, AS ALWAYS.

/S/ BO DINCER

[email protected] [email protected] TEL.: 646-256-3609

CARRIES OVER, TRUST ME... I AM NOT A ZUCKER, AND THEY CAN NOT AFFORD ME - AT ANY PRICE.

– YOUR $25 MILLION DOLLAR CREDIT LINE IS GOING TO BE ENOUGH TO COVER MY COSTS TO REPAIR THE FIRM. AND FOR FREE OF COURSE... BUT ONLY AFTER WE SETTLE.

https://www.scribd.com/document/386161673/Bo-Dincer-New-York-City-Fixed-Income-Trader-Baris-Dincer-Maritime-Capital

https://en.everybodywiki.com/Maritime_Capital_Partners_LP

#GOCARDS.

THANKS FOR REACHING OUT, HAVE A GOOD NIGHT - AND SORRY, I WAS WORKING HK HOURS AGAIN.

OUT OF COURT SETTLEMENT PRICE IN US DOLLARS $25,000,000.00

WITH THE IMPLIED "COVERAGE" OF PRE-EXISTING MATTERS SETTLED

#1 THANKS DAVID FOR YOUR EMAIL AGAIN MR. MOORE.

/S/ BO DINCER. you know what code to use, same as last 646-256-3609

#12-12.5858

##1


Monday 2022-05-23 12:06:45 by Andrew Hayworth

Split CI builds by gems at top-level (#1249)

  • fix: remove unneeded Appraisals for opentelemetry-registry

It's not actually doing anything, so we skip it.

  • ci: remove ci-without-services.yml

We're going to bring back these jobs in the next few commits, but we can delete it right now.

  • ci: remove toys/ci.rb

We're going to replicate this in Actions natively, so that we can get more comprehensible build output.

  • ci: replace toys.rb functionality with an explosion of actions + yaml

This replaces the "test it all in a loop" approach that toys/ci.rb was taking, by leveraging some more advanced features of GitHub Actions.

To start, we construct a custom Action (not a workflow!) that can run all the tests we were doing with toys/ci.rb. It takes a few different inputs: gem to test, ruby version to use, whether or not to do rubocop, etc. Then, it figures out where in the repo that gem lives, sets up ruby (including appraisals setup, if necessary), and runs rake tests (and then conditionally runs YARD, rubocop, etc).

Then, over in ci.yml, we list out all of the gems we currently have and chunk them up into different logical groups:

  • base (api, sdk, etc)
  • exporters
  • propagators
  • instrumentation that requires sidecar services to test
  • instrumentation that doesn't require anything special to test

For most groups, we set up a matrix build of operating systems (ubuntu, macos, and windows) - except for the "instrumentation_with_services" group, because sidecar services are only supported on linux.

For each matrix group (gem + os), we then have a build that has multiple steps - and each step calls the custom Action that we defined earlier, passing appropriate inputs. Each step tests a different ruby version: 3.1, 3.0, 2.7, or jruby - and we conditionally skip the step based on the operating system (we only run tests against ruby 3.1 for mac / windows, because the runners are slower and we can't launch as many at once).

Notably, we have a few matrix exclusions here: things that won't build on macos or windows, but there aren't many.

Finally, each group also maintains a "skiplist" of sorts for jruby - it's ugly, but some instrumentation just doesn't work for our Java friends. So we have a step that tests whether or not we should build the gem for jruby, and then the jruby step is skipped depending on the answer. We can't really use a matrix exclusion here because we don't use the ruby version in the matrix at all - otherwise we'd have a huge explosion of jobs to complete, when in reality we can actually install + test multiple ruby versions on a single runner, if we're careful.

The net effect of all of this is that we end up having many different builds running in parallel, and if a given gem fails we can easily see that and get right to the problem. Builds are slightly faster, too.

The major downsides are:

  • We need to add new gems to the build list when we create them.
  • We can't cache gems for appraisals, which adds a few minutes onto the build times (to be fair, we weren't caching anything before)
  • It's just kinda unwieldy.
  • I didn't improve anything around the actual release process yet.

Future improvements could be:

  • Figuring out how to cache things with Appraisals, because I gave up after a whole morning of fighting bundler.
  • Dynamically generating things again, because it's annoying to add gems to the build matrices.
  • feat: add scary warning to instrumentation_generator re: CI workflows

  • fix: remove testing change

  • ci: Add note about instrumentation_with_services


Monday 2022-05-23 12:58:37 by Odoo's Mergebot

[MERGE] im_livechat: introduce chatbot scripts

PURPOSE

This commit introduces a chatbot operator that works based on a user-defined script with various steps.

SPECS

An im_livechat.chatbot.script can be defined on a livechat rule. When an end-user reaches a website page that matches the rule, the chat window opens and the script of the bot starts iterating through its steps.

The chatbot code is currently directly integrated with the existing livechat Javascript code. It defines extra conditions and layout elements to be able to automate the conversation and register user answers.

AVAILABLE STEPS

A script is defined with several steps that can currently be one of the following types:

"text"

A simple text step where the bot posts a message without expecting an answer e.g: "Hello! I'm a friendly robot!"

"question_selection"

The bot will ask a question and suggest answers, the end-user will have to click on the answer he chooses e.g: "How can I help you? -> Create a Ticket -> Create a Lead -> Speak with a human"

"question_email"

That step will ask the end user's email address (and validate it) The result is saved on the linked im_livechat.im_livechatchatbot.mail.message

"question_phone"

Same logic as the 'question_email' for a phone number. We don't validate the input this time as it's a complicated process (requires country, ...)

"forward_operator"

Special type of step that will add a human operator to the conversation when reached, which stops the script and allows the visitor to discuss with a real person.

The operator will be chosen among the available operators on the livechat.channel.

If there is no operator available, the script continues normally, which makes it possible to automate an "answering machine" that will redirect the user in case no operator is available.

e.g: "I'm sorry, no operator is available right now, please contact us by email at '[email protected]', we will try to respond as soon as possible!". (Or even something more complex with multiple questions / paths).

"free_input_single"

Will ask the visitor for a single line of text. This text is not saved anywhere other than in the conversation, but it's still useful when combined with steps that create leads / tickets, since those print the whole conversation into the description.

"free_input_multi"

Same as "free_input_single" but lets the user input multiple lines of text. The frontend implements this by waiting a few seconds (currently 10) for either the next submitted message or the next character typed into the input.

This lets visitors explain their issue / question with multiple messages, which is very useful since new messages are sent every time you press "Enter".

"create_lead"

Special step_type that allows creating a crm.lead when reaching it. Usually used in addition to 'question_email' and 'question_phone' to create interesting leads.

LINKS

Task-2030386

closes odoo/odoo#84000

Related: odoo/enterprise#24894 Signed-off-by: Thibault Delavallee (tde) [email protected] Co-authored-by: Patrick Hoste [email protected] Co-authored-by: Aurélien Warnon [email protected]


Monday 2022-05-23 13:07:18 by z3DD3r

msm_thermal: simplified thermal driver

Thermal driver by franco. This is a combination of 9 commits:

msm: thermal: add my simplified thermal driver. Stock thermal-engine-hh goes crazy too soon and too often and offers no way for userland to tweak its parameters

Signed-off-by: franciscofranco [email protected]

msm: thermal: moar magic

Added a sample time between heat levels. The hotter it is, the longer it should stay throttled at that same freq level, thereby cooling down effectively. Also due to this change freqs have been slightly adjusted. Now the driver will start a bit earlier on boot. Few cosmetic changes too because why the fuck not.

Signed-off-by: Francisco Franco [email protected]

msm: thermal: reduce throttle point to 60C

Signed-off-by: Francisco Franco [email protected]

msm: thermal: rework previous patches

The changes in the previous patches didn't really work out. Either the device just reboots for infinity during boot if it reaches high temperatures and fails to throttle down fast enough to mitigate it, or it just crashes while benchmarking on Geekbench or Antutu, again because when there's a big ass temp spike this doesn't mitigate fast enough.

These changes are confirmed working after testing in all scenarios previously described.

Signed-off-by: Francisco Franco [email protected]

msm: thermal: work faster with more thrust

Last commit was not enough; it mitigated most of the issues, but some users were still having weird shits because temperature wasn't going down as fast as it should. So now queue it every fucking 100ms in a dedicated high prio workqueue. It's my last stand!

Signed-off-by: Francisco Franco [email protected]

msm: thermal: offline cpu2 and cpu3 if things get REALLY rough

Just for good measure, put cpu2 and cpu3 to sleep if the heat gets way bad. Also the polling time goes back to the default stock 250ms, since the earlier 100ms change was just a band-aid for a nastier bug that got fixed in the meantime.

Signed-off-by: Francisco Franco [email protected]

msm_thermal: send OFF/ONLINE uevent in hotplug cases

Send the correct uevent after setting a CPU core online or offline. This allows ueventd to set correct SELinux labels for newly created sysfs CPU device nodes.

Bug: 28887345 Change-Id: If31b8529b31de9544914e27514aca571039abb60 Signed-off-by: Siqi Lin [email protected] Signed-off-by: Thierry Strudel [email protected] [Francisco: slightly adapted from Qcom's original patch to apply] Signed-off-by: Francisco Franco [email protected]

Revert "msm_thermal: send OFF/ONLINE uevent in hotplug cases"

Crashes everything if during early early boot the device is hot and starts to throttle. It's madness!

This reverts commit 80e38963f8080c3c9d26374693dd0f0a88f8060b.

msm: thermal: return to original simplified driver

Some users still had a weird issue that I was unable to reproduce, which consisted of either cpu2 and cpu3 getting stuck in offline mode, or, after a gaming session while charging, the device crashing with "hw_reset" and then looping bootloader -> boot animation forever until it cooled down.

My test was leaving the device charging during the night, brightness close to max and running Stability Test app with the CPU+GPU suite. I woke up and the device was still running it flawlessly. Rebooted while hot and it booted just fine.

Since I was unable to reproduce the issue, and @osm0sis flashed back to <r92 and was unable to reproduce it anymore, here we go back to that stage.

The only change I made compared to that original driver was to simply queue things into a dedicated high prio wq for faster thermal mitigation. The rest is unchanged.

Signed-off-by: Francisco Franco [email protected]


Monday 2022-05-23 13:24:52 by Paul Berberian

fix unknown active Period happening when switching rapidly between Representation

One of the applications using the RxPlayer at Canal+ recently experienced a strange new bug: some methods, like getAvailableVideoBitrates, would always return an empty array, even when it is evident that there are multiple video bitrates available.

After some quick checks, it turned out that multiple RxPlayer APIs were in the same situation, and that this was due to the RxPlayer API module not knowing which Period (subpart of the current content with its own tracks and bitrate characteristics) was being played.


The API module relies on the Stream module's ActivePeriodEmitter to know which Period is being played.

This last part (the ActivePeriodEmitter) knows which Period is the "active Period" by exploiting a side-effect of the current Stream behavior:

The "active Period" is the first chronological Period for which each type of buffer ("audio", "video", "text") has a corresponding active RepresentationStream. Why this weird rule is used (instead of simpler solutions like relying on the current position) is out of the scope of this message.

Anyway, it turns out there was a pretty big bug in that ActivePeriodEmitter: if multiple RepresentationStreams were created for a single buffer type (audio, video, text) before other buffer types had their first one for a given Period, it would be possible to never be able to emit this Period as "active".

The source of the bug seems to be a very evident logical error. What was written as:

if (A && B) {
  // Do thing
} else {
  // Do other thing
}

Should have been written:

if (A && B) {
  // Do thing
} else if (!B) {
  // Do other thing
}

or more succinctly (and simply):

if (!A) {
  // Do other thing
} else if (B) {
  // Do thing
}

I like to talk about this type of error as a "logical typo" because it makes no sense when you read it, yet was most likely written with well-thought-out logic in mind; it's just that the execution was poorly done.


Now the biggest question is: why are we seeing this more than 2-year-old bug only now and not before?

I think it may be because we've been lucky (though I prefer to consider us unlucky here; I generally prefer immediately-caught errors):

  1. Most contents have only one Period, and in those we will usually create a single RepresentationStream per type synchronously at the beginning. In this case, no error happens.

  2. Even for multi-Period contents, chances are that the text and audio RepresentationStreams, which generally are much less heavy and thus pre-loaded faster, will be created before the video one, and we very rarely switch between audio or text Representations.

    Thus we rarely switch between audio or text RepresentationStreams before the first video RepresentationStream is announced, and thus don't see any bug.

There might be other causes. I'm very surprised that we never either caught this bug nor saw some weird related behavior on multi-Period contents due to it.


Monday 2022-05-23 13:38:39 by Paul Berberian

HTMLMediaElement's related error have now the initial message if it exists

I was very shamefully not aware that MediaErrors as emitted by HTMLMediaElement could have a message property describing the actual problem.

In my defense, this was not always the case for MediaErrors (I found some w3c and chrome links to prove it!). Yet it apparently has been since 2017, so my defense is still pretty weak.

Relying on those could definitely have saved us many hours of debugging over the years, where we were trying to find which segment of which type provoked a MEDIA_ERR_DECODE and why.

Anyway, I prefer not to think too much about it; here it is, and now it's available: the corresponding error message will actually be the message of the corresponding RxPlayer MediaError (yes, both the native browser error and the RxPlayer supplementary layer have the exact same name, and no, it cannot be a source of confusion at all, why would you say that?).


Monday 2022-05-23 15:21:17 by Nik Everett

TSDB: Support GET and DELETE and doc versioning (#82633)

This adds support for GET and DELETE and the ids query and Elasticsearch's standard document versioning to TSDB. So you can do things like:

POST /tsdb_idx/_doc?filter_path=_id
{
  "@timestamp": "2021-12-29T19:25:05Z", "uid": "adsfadf", "v": 1.2
}

That'll return {"_id" : "BsYQJjqS3TnsUlF3aDKnB34BAAA"} which you can turn around and fetch with

GET /tsdb_idx/_doc/BsYQJjqS3TnsUlF3aDKnB34BAAA

just like any other document in any other index. You can delete it too! Or fetch it.

The ID comes from the dimensions and the @timestamp. So you can overwrite the document:

POST /tsdb_idx/_bulk
{"index": {}}
{"@timestamp": "2021-12-29T19:25:05Z", "uid": "adsfadf", "v": 1.2}

Or you can write only if it doesn't already exist:

POST /tsdb_idx/_bulk
{"create": {}}
{"@timestamp": "2021-12-29T19:25:05Z", "uid": "adsfadf", "v": 1.2}

This works by generating an id from the dimensions and the @timestamp when parsing the document. The id looks like:

  • 4 bytes of hash from the routing calculated from routing_path fields
  • 8 bytes of hash from the dimensions
  • 8 bytes of timestamp

All of that is base 64 encoded so that Uid can chew on it fairly efficiently.

When it comes time to fetch or delete documents we base 64 decode the id and grab the routing from the first four bytes. We use that hash to pick the shard. Then we use the entire ID to perform the fetch or delete.
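A hedged sketch in C of the byte layout described above (the real implementation is Java inside Elasticsearch; the hash inputs and names here are stand-ins): 4 + 8 + 8 = 20 bytes, which base 64 encodes unpadded to 27 characters - the same length as the example id above.

#include <stdint.h>
#include <string.h>

/* Unpadded base 64 (URL-safe alphabet) of n input bytes. */
static size_t b64url(const uint8_t *in, size_t n, char *out)
{
    static const char tbl[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789-_";
    size_t o = 0;
    for (size_t i = 0; i < n; i += 3) {
        uint32_t v = (uint32_t)in[i] << 16;
        if (i + 1 < n) v |= (uint32_t)in[i + 1] << 8;
        if (i + 2 < n) v |= in[i + 2];
        out[o++] = tbl[(v >> 18) & 63];
        out[o++] = tbl[(v >> 12) & 63];
        if (i + 1 < n) out[o++] = tbl[(v >> 6) & 63];
        if (i + 2 < n) out[o++] = tbl[v & 63];
    }
    out[o] = '\0';
    return o;
}

/* Toy model of the _id layout: routing hash, dimensions hash, timestamp
 * (host byte order here; the real layout is the implementation's choice).
 * On fetch/delete, the first 4 decoded bytes give back the routing. */
static void make_tsdb_id(uint32_t routing_hash, uint64_t dimensions_hash,
                         uint64_t timestamp, char out[32])
{
    uint8_t buf[20];
    memcpy(buf, &routing_hash, 4);
    memcpy(buf + 4, &dimensions_hash, 8);
    memcpy(buf + 12, &timestamp, 8);
    b64url(buf, sizeof buf, out);
}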

We don't implement update actions because we haven't written the infrastructure to make sure the dimensions don't change. It's possible to do, but feels like more than we need now.

There are a ton of compromises with this. The long term sad thing is that it locks us into indexing the id of the sample. It'll index fairly efficiently because each time series will have the same first eight bytes. It's also possible we'd share many of the first few bytes in the timestamp as well. In our tsdb rally track this costs 8.75 bytes per document. It's substantial, but not overwhelming.

In the short term there are lots of problems that I'd like to save for a follow up change:

  1. We still generate the automatic _id for the document but we don't use it. We should stop generating it. Included in this PR based on review comments.
  2. We generate the time series _id on each shard and when replaying the translog. It'd be the good kind of paranoid to generate it once on the primary and then keep it forever.
  3. We have to encode the _id as a string to pass it around Elasticsearch internally. And Elasticsearch assumes that when an id is loaded we always store it as bytes encoding the Uid - which does have nice encoding for base 64 bytes. But this whole thing requires us to make the bytes, base 64 encode them, and then hand them back to Uid to base 64 decode them into bytes. It's a bit hacky. And, it's a small thing, but if the first byte of the routing hash encodes to 254 or 255 then Uid spends an extra byte to encode it. One that'll always be a common prefix for tsdb indices, but still, it hurts my heart. It's just hard to fix.
  4. We store the _id in Lucene stored fields for tsdb indices. Now that we're building it from the dimensions and the @timestamp we really don't need to store it. We could recalculate it when fetching documents. In the tsdb rally track this'd save us 6 bytes per document at the cost of marginally slower fetches. Which is fine.
  5. There are several error messages that try to use _id right now during parsing but the _id isn't available until after the parsing is complete. And, if parsing fails, it may not be possible to know the id at all. All of these error messages will have to change, at least in tsdb mode.
  6. If you specify an _id on the request right now we just overwrite it. We should send you an error. Included in this PR after review comments.
  7. We have to entirely disable the append-only optimization that allows Elasticsearch to skip looking up the ids in lucene. This halves indexing speed. It's substantial. We have to claw that optimization back somehow. Something like sliding bloom filters or relying on the increasing timestamps.
  8. We parse the source from json when building the routing hash when parsing fields. We should just build it from the parsed field values. It looks like that'd improve indexing speed by about 20%.
  9. Right now we write the @timestamp little endian. This is likely bad for the prefix encoded inverted index. It'll prefer big endian. Might shrink it.
  10. Improve error message on version conflict to include tsid and timestamp.
  11. Improve error message when modifying dimensions or timestamp in update_by_query
  12. Make it possible to modify dimension or timestamp in reindex.
  13. Test TSDB's _id in RecoverySourceHandlerTests.java and EngineTests.java.

I've had to make some changes as part of this that don't feel super expected. The biggest one is changing Engine.Result to include the id. When the id comes from the dimensions it is calculated by the document parsing infrastructure, which happens in IndexShard#prepareIndex. Which returns an Engine.IndexResult. To make everything clean I made it so the id is available on all Engine.Results and I made all of the "outer results classes" read from Engine.Results#id. I'm not excited by it. But it works and it's what we're going with.

I've opted to create two subclasses of IdFieldMapper, one for standard indices and one for tsdb indices. This feels like the right way to introduce the distinction, especially if we don't want tsdb to carry around its old fielddata support. Honestly if we need to aggregate on _id in tsdb mode we have doc values for the tsid and the @timestamp - we could build doc values for _id on the fly. But I'm not expecting folks will need to do this. Also! I'd like to stop storing tsdb's _id field (see number 4 above) and the new subclass feels like a good place to put that too.


Monday 2022-05-23 15:33:27 by Kylerace

Fixes Massive Radio Overtime, Implements a Spatial Grid System for Faster Searching Over Areas (#61422)

a month or two ago i realized that on master the reason why get_hearers_in_view() overtimes so much (ie one of our highest overtiming procs at highpop) is because when you transmit a radio signal over the common channel, it can take ~20 MILLISECONDS, which isn't good when 1. player verbs and commands usually execute after SendMaps processes for that tick, meaning they can execute AFTER the tick was supposed to start if master is overloaded and there's a lot of maptick 2. each of our server ticks are only 50 ms. so i started on optimizing this.

the main optimization was SSspatial_grid, which allows searching through 15x15 spatial_grid_cell datums (one set for each z level) far faster than iterating over movables in view() to look for what you want. now all hearing sensitive movables in the 5x5 areas associated with each spatial_grid_cell datum are stored in the datum (so are client mobs). when you search for one of the stored "types" (hearable or client mob) in a radius around a center, it just needs to

iterate over the cell datums in range
add the content type you want from the datums to a list
subtract contents that arent in range, then contents not in line of sight
return the list

from benchmarks, this makes short range searches like what is used with radio code (it goes over every radio connected to a radio channel that can hear the signal, then calls get_hearers_in_view() to search in the radio's canhear_range, which is at most 3) about 3-10 times faster depending on workload. the line of sight algorithm scales well with range but not very well if it has to check LOS to > 100 objects, which seems incredibly rare for this workload; the largest range any radio in the game searches through is only 3 tiles
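a toy sketch in C of the grid idea (the real code is DM; the cell size, grid size, and names here are assumptions for illustration): a radius query only visits the cells overlapping the query square and then filters by exact range, instead of scanning every movable in view.

#include <stdlib.h>

#define CELL_SIZE  15   /* tiles per cell side (assumed)            */
#define GRID_CELLS 17   /* cells per axis on one z-level (assumed)  */

struct movable { int x, y; struct movable *next_in_cell; };
struct cell    { struct movable *hearables; };

static struct cell grid[GRID_CELLS][GRID_CELLS];

static int clampi(int v, int lo, int hi) { return v < lo ? lo : v > hi ? hi : v; }

/* Collect hearables whose cells overlap the query square, then filter
 * by exact Chebyshev range; line-of-sight filtering would follow. */
static size_t hearables_in_range(int cx, int cy, int range,
                                 struct movable **out, size_t max)
{
    size_t n = 0;
    int lo_x = clampi((cx - range) / CELL_SIZE, 0, GRID_CELLS - 1);
    int hi_x = clampi((cx + range) / CELL_SIZE, 0, GRID_CELLS - 1);
    int lo_y = clampi((cy - range) / CELL_SIZE, 0, GRID_CELLS - 1);
    int hi_y = clampi((cy + range) / CELL_SIZE, 0, GRID_CELLS - 1);
    for (int gx = lo_x; gx <= hi_x; gx++)
        for (int gy = lo_y; gy <= hi_y; gy++)
            for (struct movable *m = grid[gx][gy].hearables; m; m = m->next_in_cell)
                if (n < max && abs(m->x - cx) <= range && abs(m->y - cy) <= range)
                    out[n++] = m;
    return n;
}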

the second optimization is to enforce complex setter vars for radios that remove them from the global radio list if they couldn't actually receive any radio transmissions from a given frequency in the first place.

the third optimization i did was massively reduce the number of hearables on the station by making hologram projectors not hear if they don't have an active call/anything that would make them need hearing. so one of the most common non player hearables that require view iteration to find is crossed out.

also implements a variation of an idea oranges had on how to speed up get_hearers_in_view() now that i've realized that view() can't be replicated by a raycasting algorithm. it distributes pregenerated abstract /mob/oranges_ear instances to all hearables in range such that there's at max one per turf, then iterates through only those mobs to take advantage of type-specific view() optimizations, just adding up the references in each one to create the list of hearing atoms, then putting the oranges_ear mobs back into nullspace. this is about 2x as fast as the get_hearers_in_view() on master
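a toy C sketch of the oranges_ear trick as described (the real code is DM; every name here is invented): park at most one pre-allocated "ear" per tile, let each hearable on that tile register itself on the ear, then walk only the ears and sum up their references.

#include <stddef.h>

#define MAX_PER_TILE 8

/* One pre-made collector per occupied tile; hearables on the tile
 * register themselves here before the walk. */
struct ear {
    int tile_x, tile_y;
    size_t n;
    void *hearers[MAX_PER_TILE];
};

/* Iterate only the few ears (at most one per turf) instead of every
 * movable returned by view(), and gather their parked references. */
static size_t collect_from_ears(struct ear *ears, size_t n_ears,
                                void **out, size_t max)
{
    size_t n = 0;
    for (size_t e = 0; e < n_ears; e++)
        for (size_t i = 0; i < ears[e].n && n < max; i++)
            out[n++] = ears[e].hearers[i];
    return n;   /* ears then go back to the free pool ("nullspace") */
}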

holy FUCK it's fast. like really fucking fast. the only costly part of the radio transmission pipeline i don't touch is mob/living/Hear(), which takes ~100 microseconds on live, but searching through every radio in the world with get_hearers_in_radio_ranges() -> get_hearers_in_view() is much faster, as is the filtering radios step

the spatial grid searching proc is about 36 microseconds/call at 10 range and 16 microseconds at 3 range in the captains office (relatively many hearables in view), the new get_hearers_in_view() was 4.16 times faster than get_hearers_in_view_old() at 10 range and 4.59 times faster at 3 range

SSspatial_grid could be used for a lot more things other than just radio and say code, i just didn't implement it. for example, since the cells are datums you could get all cells in a radius, then register for new objects entering them, then activate when a player enters your radius. this is something that would otherwise require either very expensive view() calls or iterating over every player in the global list and calling get_dist() on them, which isn't that expensive but is still worse than it needs to be

in terms of normal get_hearers_in_view() cost, the new version that uses /mob/oranges_ear instances is about 2x faster than the old version, especially since the number of hearing sensitive movables has been brought down dramatically.

with get_hearers_in_view_oranges_ear() being the benchmark proc that implements this system, get_hearers_in_view() being a slightly optimized version of the version we have on master, get_hearers_in_view_as() being a more optimized version of the one we have on master, and get_hearers_in_LOS() being the raycasting version currently only used for radios because it can't replicate view()'s behavior perfectly.

(cherry picked from commit d005d76f0bd201060b6ee515678a4b6950d9f0eb)

Conflicts:

.github/CODEOWNERS

code/game/objects/items/devices/radio/radio.dm


Monday 2022-05-23 16:37:24 by Nicholas Feinberg

Make Hell Knights evil again (catern)

Lost this when they lost Pain.

Slightly hacky.


Monday 2022-05-23 16:51:02 by Mike Griese

Manually copy trailing attributes on a resize (#12637)

THE WHITE WHALE

This is a fairly naive fix for this bug. It's not terribly performant, but neither is resize in the first place.

When the buffer gets resized, we typically only copy the text up to the MeasureRight point, the last printable char in the row, and then just use the last char's attributes to fill the remainder of the row.

Instead, this PR changes how reflow behaves when it gets to the end of the row. After we finish copying text, we then manually walk through the attributes at the end of the row and copy them over. This ensures that cells that just have a colored space in them get copied into the new buffer as well, and we don't just blat the last character's attributes into the rest of the row. We'll do a similar thing once we get to the last printable char in the buffer, copying the remaining attributes.
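
As a toy model of the change (Python, with an invented data model where a row is a list of (char, attr) cells; the real buffer stores attributes run-length encoded in ATTR_ROW), the difference is that the attributes past the last printable character are walked and copied, rather than filled in from the last character:

```python
def copy_row(old_row, new_width, blank=(" ", None)):
    # find the last printable character (the old MeasureRight point)
    last = max((i for i, (ch, _) in enumerate(old_row) if ch != " "), default=-1)
    new_row = old_row[:last + 1][:new_width]   # copy the text as before
    # walk the trailing attributes so colored-but-empty cells survive the resize,
    # instead of filling the rest of the row with the last char's attributes
    for _ch, attr in old_row[last + 1:new_width]:
        new_row.append((" ", attr))
    while len(new_row) < new_width:            # anything past the old row is blank
        new_row.append(blank)
    return new_row
```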

This could DEFINITELY be more performant. I think this current implementation walks the attrs on every cell, then appends the new attrs to the new ATTR_ROW. That could be optimized by just using the actual iterator. The copy after the last printable char bit is also especially bad in this regard. That could likely be a blind copy - I just wanted to get this into the world.

Finally, we now copy the final attributes to the correct buffer: the new one. We used to copy them to the old buffer, which we were about to destroy.

Validation

I'll add more gifs in the morning; there wasn't enough time to finish spinning a release Terminal build with this tonight.

Closes #32 🎉🎉🎉🎉🎉🎉🎉🎉🎉 Closes #12567


Monday 2022-05-23 17:03:38 by Noah Nuebling

Refactoring and added documentation.

Ideas on what to still improve about the backend:

  • When actually using Click and Drag to Scroll (I just worked on something in Sketch and used it), it still crashes ALL the time. → Absolutely need to fix this before shipping
  • Mouse pointer hiding (used for Click and Drag to Scroll's pointer freezing) doesn't work properly in some scenarios
    • In Steam if the hidden pointer is over certain elements that make the cursor into a pointing hand cursor, that unhides it.
    • Over the Dock it doesn't work
    • In Mission Control it doesn't work properly either
    • Basically, if something changes the pointer, then that unhides it. And for Mission Control I think the problem is that the Canvas Window will not be displayed as an overlay but be hidden, so we can't draw the fake mouse pointer, and hovering over the windows in Mission Control reveals the real pointer.
  • Certain triggers are still recognized in AddMode but don't work anymore (I think an example is Button combinations, like "Click Button 5 + Click Button 4")
  • Think about also changing pxPerTickStart (not only pxPerTickEnd) dynamically based on screen canvas size and inertia level as well
    • Fix bug where the cursor sometimes disappears after using Drag Scroll - mostly when the computer is slow → Make sure the cursor stuff is EXTREMELY robust. Just crash the app if anything even comes close to going wrong.
  • Final polish for interaction between different modifications (e.g. starting to drag while you're clicking-and-scrolling)
  • Improve gesture scroll simulation
    • tune up the gesture values so that swiping between pages, marking as unread, etc are easier to trigger → Edit: Did this. Is nicer now
    • Make it work with Messages.app side swiping
    • In Mail, the side swiping sensitivity weirdly changed during the swipe making it feel weird
    • Sometimes when the computer is slow or something it breaks, and then you can't do the side swipes in any app. (Not sure if this also happens with the Trackpad)
  • Make fast scroll work better
    • scrollSwipeThreshold being compared to time between last tick of the previous swipe and first tick of the current swipe doesn't really make sense I think? Not sure.
    • Not sure if this whole business of a free scrolling wheel having lots of consecutive ticks in one swipe adding to the number of swipes makes sense. It's really weird and sort of confusing.
    • I can't even trigger fast scroll right now with my M720. Only when quickScroll is active.

Ideas on what to improve in the Frontend (only the parts which we'll copy over to the new UI of course - so mostly the RemapTable)

  • the ButtonGroupRows in the RemapTable are fucked up pre-Big Sur
  • It should display "Click Button 4 + Click and Drag Button 5", but it only displays "Click Button 4 + Click and Drag"
  • Should maybe change "Double Click Button 5 + Hold Button 4" to "Double Click and Hold Button 5, then Hold Button 4" - at least for the tooltip? Not sure.

Most important next step:

  • Build the scroll settings UI and connect it up with the backend

Monday 2022-05-23 17:43:39 by IDF31

fuck you yard and your microoptimisation benchmarks


Monday 2022-05-23 17:46:02 by Chris Down

Do not allow focus to drift from fullscreen client via focusstack()

It generally doesn't make much sense to allow focusstack() to navigate away from the selected fullscreen client, as you can't even see which client you're selecting behind it.

I have had this up for a while on the wiki as a separate patch[0], but it seems reasonable to avoid this behaviour in dwm mainline, since I'm struggling to think of any reason to navigate away from a fullscreen client other than a mistake.

0: https://dwm.suckless.org/patches/alwaysfullscreen/


Monday 2022-05-23 18:10:29 by Mark Shields

*** Collage v2 sketch ***

  • Host glitch if PlanDevices run before CollagePartition
  • Fix unit test
  • Make load_static_library first class python func
  • Get CUTLASS going on graph executor as well as vm
  • Include export_library in estimate_seconds
  • Rollback DSOLibrary changes.
  • Add StaticLibraryNode and switch CUTLASS to use it. This avoids the crazy serialize/deserialize/load hackery, which I'll now remove.
  • Get running again
  • CUTLASS picks up all options from 'cutlass' external codegen target.
  • Revert false starts with cutlass handling
  • Get CUTLASS going with program-at-a-time tuning and compilation instead of function at a time.
  • Save DSOLibraries by contents rather than by reference.
  • futzing with libraries
  • revert unnecessary cutlass changes
  • starting unit test for dsolibrary save
  • Prepare scalar changes for PR.
  • Eager candidate cost measurement.
  • More conv2d_cudnn.cuda training records.
  • cleanup before rebase
  • Use 'regular' target when build, not external codegen target
  • Tuned for -libs=cudnn
  • Tune before collage not during
  • Bring over target changes
  • Fix GetSpecName
  • Try again on python target changes, this time leave check_and_update_host_consist unchanged
  • Revert python target changes to try again less aggressively
  • Few other cleanups
  • Switch to 'external codegen targets' style
  • Woops, run just_tvm after collage to pick up tuning logs
  • Finish tuning for rtx3070
  • Run them all!
  • Update tuning logs
  • Share global vars in the candidate function cache
  • Finished tuning mobilenet, started on resnet50.
  • Include model name in logs to make sure we don't get anything mixed up
  • Drop -arch=sm_80
  • Fix MaxCoalesce
  • Attach external_symbol to lifted functions
  • Add missing node registration, but leave VisitAttrs empty for now
  • Make MaxCoalesce as aggressive as possible, since simple impl did not handle sharing.
  • Finish tuning resnext50
  • Improve coalescing
  • Account for coalesced functions when outlining final module
  • Fix caching, for real this time.
  • More nn.conv2d autotvm tuning records, but still not done with resnext50_32_4d.
  • OutlineExternalFunction both when preparing to estimate cost and after optimal partitioning applied.
  • Use fp16 in TensorRT only if model's 'main_dtype' is float16.
  • Fix CostEstimator caching issue
  • More Target cleanup (while waiting for tuning runs)
  • Better logging of candidates
  • Support export to ONNX
  • Fix merge
  • Part-way through tuning for mobilenet.
  • Add resnext50_32x4d
  • Lift all "Compiler" functions before estimating to ensure no Relay passes are run on them
  • Still trying
  • Trying to track down weird failure in conv2d compute.
  • Switch tensorrt to be fully pattern & composite function based
  • Combiner rule for tuple projection
  • Allow build to fail in estimate_seconds
  • Add mobilenetv2 and resnet50v2 to menagerie
  • Update CompilationConfig to handle target refinement
  • Nuke remaining uses of TargetMap in favor of CompilationConfig (still needs to be pushed into python side)
  • Save/Load dso libraries (needed for Cutlass with separated run)
  • Move models into separate file
  • gpt2_extract_16 and autotvm tuning log
  • Handle missing tuning log files
  • fp16 support in scalars and the tensorrt runtime.
  • Wrap runner in nsys nvprof if requested
  • Enforce strict compile/run time separation in preparation for profiling
  • Better logging of final optimal partitioning and state of all candidates
  • Fix handling of tuples and InlineComposites fixup pass.
  • Fix TensorRT pattern bugs
  • Pass max_max_depth via PassContext
  • Better logging so can quickly compare specs
  • BUG: Benchmark the partitioned rather than original model!!!
  • Use median instead of mean
  • Back to GPT2
  • Make sure all function vars have a type
  • Don't extract tasks if estimating BYOC-only (Was double-tuning every cutlass kernel).
  • Make sure cudnn pattern table is registered
  • Enable cudnn, get rid of support for op-predicate based BYOC integrations
  • Enable cublas
  • And yet another go at pruning unnecessary candidates.
  • Another go at pruning unnecessary candidates
  • Fix CompositePartitionRule use
  • Fix a few bugs with new TensorRT pattern-based integration
  • Rework RemoveSubCandidatesCombinerRule for soundness
  • Better logging
  • Bug fixes
  • Implement critical nodes idea for avoiding obviously unnecessary candidates
  • Promote DataflowGraph from alias to class so can cache downstream index set
  • Quick check to avoid unioning candidates which would create a cycle
  • Hoist out CandidatePartitionIndex and add rules to avoid small candidates subsumed by containing candidates
  • GetFunction can legitimately return nullptr
  • rename tuning log
  • Support for int64 literals
  • Switch GPT2 to plain model
  • Fix library clobbering issue for cutlass
  • actually checkin 'built in' tuning log (covers mnist & gpt2 only)
  • trying to debug gpt2
  • Update TargetKind attribute name
  • working through gpt2 issues
  • checkin tuning records for MNIST (with hack to not retry failed winograd)
  • Autotvm tuning disabled if log file empty (default)
  • Autotvm tuning during search working
  • tune during search (but does not load tuned records after search!)
  • About to add tuning to estimate_seconds
  • Split out the combiner rules & make them FFI friendly
  • Rework comments
  • Estimate IRModule instead of Function (closer to meta_schedule iface)
  • Add 'host' as first-class partitioning spec (Avoids special casing for the 'leave behind for the VM' case)
  • Move CollagePartitioner to very start of VM compiler flow (not changing legacy)
  • Fix bugs etc. with new SubGraph::Rewrite approach. Ready for updating RFC to focus on partitioning instead of fusion.
  • Working again after partition<->fusion split.
  • Add PrimitivePartitionRule
  • Refactor SubGraph Extract/Rewrite
  • Rename kernel->partition, fusion->partition
  • Next: make nesting in "Primitive" an explicit transform
  • respect existing target constraints from device planner
  • make 'compiler' and 'fusion_rule' attributes avail on all target kinds
  • moved design to tvm-rfcs, apache/tvm-rfcs#62
  • incorporate comments
  • avoid repeated fusion
  • fix trt type checking
  • better logs
  • pretty print primitive rules
  • fix tensorrt
  • multiple targets per spec
  • don't extract candidate function until cost is needed. Need to bring CombineByPrimitives back under control since the depth limit was lost.
  • cleaned up fusion rule names
  • added 'fuse anything touching' for BYOC
  • Finish dd example
  • Add notion of 'MustLower': even if a candidate fires, we may still need to consider leaving the node behind for the VM (especially for constants).
  • starting example
  • finished all the dd sections
  • documentation checkpoint
  • docs checkpoint
  • more design
  • starting on dd
  • runs MNIST with TVM+CUTLASS+TRT
  • cutlass function-at-a-time build
  • need to account for build_cutlass_kernels_vm
  • move cutlass tuning into relay.ext.cutlass path to avoid special case
  • add utils
  • don't fuse non-scalar constants for tvm target.
  • stuck on cuda mem failure on conv2d, suspect bug in main
  • where do the cutlass attrs come from?
  • running, roughly
  • pretty printing, signs of life
  • wire things up again
  • Switch SubGraph and CandidateKernel to TVM objects
  • naive CombineByKindFusionRule, just to see what we're up against. Will switch to Object/ObjectRef for SubGraph and CandidateKernel to avoid excess copying.
  • preparing to mimic FuseOps
  • rework SubGraph to use IndexSet
  • rough cut at MaximalFusion
  • split SubGraph and IndexSet in preparation for caching input/output/entry/exit sets in SubGraph.
  • top-down iterative handling of sub-sub-graphs
  • about to give up on one-pass extraction with 'sub-sub-graphs'
  • Add notion of 'labels' to sub-graphs
  • Rework FusionRules to be more compositional
  • partway through reworking fusion rules, broken
  • SubGraph::IsValid, but still need to add no_taps check
  • dataflow rework, preparing for SubGraph::IsValid
  • explode into subdir
  • mnist with one fusion rule (which fires twice) working
  • switch to CandidateKernelIndex
  • Confirm can measure 'pre-annotated' primitive functions
  • checkpoint
  • stuff
  • more sketching
  • dominator logging

Monday 2022-05-23 18:57:06 by victorli2002

Send reviews to Firebase

Was able to send reviews/comments to Firebase, but this shit like doesn't work half of the time because of some localhost bullshit. "Something was already running on PORT 3000" like bitch gtfo.

Honestly restarting React was like the only fix that I found, but it didn't work all the time. Maybe my wifi is bad?


Monday 2022-05-23 18:58:20 by petrero

34.2.Dynamic Disable an Action & AdminContext

So now the trick is to use this information (and there's a lot of it) to modify this config and disable the DELETE action in the right situation. Back over in our listener, the first thing we need to do is get that AdminContext. Set a variable and do an if statement all at once: if (! $adminContext = $event->getAdminContext()), then return.

I'm coding defensively. It's probably not necessary... but technically the getAdminContext() method might not return an AdminContext. I'm not even sure if that's possible, but better safe than sorry. Now get the CrudDto the same way: if (! $crudDto = $adminContext->getCrud()), then also return. Once again, this is theoretically possible... but not going to happen (as far as I know) in any real situation.

Next, remember that we only want to perform our change when we're dealing with the Question class. The CrudDto has a way for us to check which entity we're dealing with. Say if ($crudDto->getEntityFqcn() !== Question::class), then return.

So... this is relatively straightforward, but, to be honest, it took me some digging to find just the right way to get this info.

Disabling the Action

  • Now we can get to the core of things. The first thing we want to do is disable the delete action entirely if a question is approved. We can get the entity instance by saying $question = $adminContext->getEntity()->getInstance(). The getEntity() gives us an EntityDto object... and then you can get the instance from that.

Below, we're going to do something a little weird at first. Say if ($question instanceof Question) (I'll explain why I'm doing that in a second) && $question->getIsApproved(), then disable the action by saying $crudDto->getActionsConfig() - which gives us an ActionConfigDto object - then ->disableActions() with [Action::DELETE].

There are a few things I want to explain. The first is that this event is going to be called at the beginning of every CRUD page. If you're on a CRUD page like EDIT, DELETE, or DETAIL, then $question is going to be a Question instance. But, if you're on the index page... that page does not operate on a single entity. In that case, $question will be null. By checking for $question being an instanceof Question, we're basically checking to make sure that $question isn't null. It also helps my editor know, over here, that I can call the ->getIsApproved() method.

The other thing I want to mention is that, at this point, when you're working with EasyAdmin, you're working with a lot of DTO objects. We talked about these earlier. Inside of our controller, we deal with these nice objects like Actions or Filters. But behind the scenes, these are just helper objects that ultimately configure DTO objects. So in the case of Actions, internally, it's really configuring an ActionConfigDto. Any time we call a method on Actions... it's actually... if I jump around... making changes to the DTO.

And if we looked down here on the Filters class, we'd see the same thing. So by the time you get to this part of EasyAdmin, you're dealing with those DTO objects. They hold all of the same data as the objects we're used to working with, but with different methods for interacting with them. In this case, if you dig a bit, getActionsConfig() gives you that ActionConfigDto object... and it has a method on it called ->disableActions(). I'll put a comment above this that says:

// disable action entirely for delete, detail & edit pages

Yup, if we're on the detail, edit, or delete pages, then we're going to have a Question instance... and we can disable the DELETE action entirely.

But this isn't going to disable the links on the index page. Watch: if we refresh that page... all of these are approved, so I should not be able to delete them. If I click "Delete" on ID 19... yay! It does prevent us:

You don't have enough permissions to run the "delete" action [...] or the "delete" action has been disabled.

That's thanks to us disabling it right here. And also, if we go to the detail page, you'll notice that the "Delete" action is gone. But if we click a Question down here, like ID 24 that is not approved, it does have a "Delete" button.


Monday 2022-05-23 18:58:20 by petrero

33.5. Conditionally Disabling an Action

Boo Ryan. I always do that. Inside update(), you need to return the action. There we go, much better!

And now... if we check the menu... look! The "Delete" action is gone! But if you go down to ID 24 - which is not approved - it's there! That's awesome!

Forbidding Deletes Dynamically

  • But, this isn't quite good enough. We're hiding the link on this one page only. And so, we should repeat this for the DELETE action on the detail page. And... you may need to disable the delete batch action entirely.

But even that wouldn't be enough... because if an admin somehow got the "Delete" URL for an approved question, the delete action would still work. The action itself isn't secure.

To give us that extra layer of security, right before an entity is deleted, let's check to see if it's approved. And if it is, we'll throw an exception.

To test this, temporarily comment-out this logic and return true... so that the delete link always shows. Back to the Questions page... got it!

Now go to the bottom of QuestionCrudController. Earlier we overrode updateEntity(). This time we're going to override deleteEntity()... which will allow us to call code right before an entity is deleted. To help my editor, I'll document that the entity is going to be an instance of Question.

Now, if ($entityInstance->getIsApproved()), throw a new \Exception('Deleting approved questions is forbidden'). This is going to look like a 500 Error to the user... so we could also throw an "access denied exception". Either way, this isn't a situation that anyone should have... unless we have a bug in our code or a user is trying to do something they shouldn't. Bad admin user!

I won't try this, but I'm pretty sure it would work. However, this is all a bit tricky! You need to secure the actual action... and also make sure that you remember to hide all the links to this action with the correct logic.

Life would be a lot easier if we could, instead, truly disable the DELETE action conditionally, on an entity-by-entity basis. If we could do that, EasyAdmin would hide or show the "Delete" links automatically... and even handle securing the action if someone guessed the URL.

Is that possible? Yes! We're going to need an event listener and some EasyAdmin internals. That's next.


Monday 2022-05-23 19:21:11 by Manuel Strobel

this shit really works, fuck yeah! Monday: 21:20


Monday 2022-05-23 19:21:17 by aloe

haha what if we fundamentally didn't understand inheritance wouldn't that be fucking hilarious


Monday 2022-05-23 19:35:02 by GrafKacper

Updated tank localisation

Fuck you github fuck you fuck you fuck you.


Monday 2022-05-23 19:46:12 by Marko Grdinić

"1:05pm. Done with breakfast. Let me resume. I haven't done much in the morning so I am eager to start. Let me start things off, but putting down the really primitive items.

1:10pm. Focus me. Let me block the items out.

1:25pm. I think I understand the way rotation works now. With axis revolve the first two points set the axis. The start point of the rotation then gets projected onto the plane intersecting the first point.

1:30pm. Focus me. Turn off the Hyrule Castle theme. It is time to start blocking things out.

I can do this thing.

1:55pm. Putting organic objects on top of each other in Moi is quite difficult. Also, I've realized I made a mistake getting rid of the curves for the fax cables. As long as they are still there, I can in fact use them to move the cable.

But it is not a big deal. I'll manage it.

2:35pm. 187 objects in the shelves file. Am I tired? Yes, I am.

I've put way more effort than I should into this as it is.

By all means I could do further detailing, but it does not matter.

2:35pm. If I wanted to do something like a manga, this way of working is definitely not the way to go. 3d has huge advantages in parallelism. You can have a dedicated team for making props, for texturing, for lighting, for layouting and so on.

You can't do that with painting. Having multiple people work on a single painting would just make a mess.

But the parallelism advantages of 3d as they are don't really matter to me. I don't have a team, and I can't replicate myself to assign work to my forks.

I need speed.

If I continue on this course, I have every confidence that I could become a decent 3d artist, but so what? What matters is whether I can produce decent illustrations for Heaven's Key, and right now I can't. It needs to be fast, much faster.

2:40pm. I really should put some things in the cup and hollow it out, but never mind that for now. What I will do however is put some stuff on the bed. Not the creases, but some randomly sized cylinders. Maybe I'll use geo nodes to scatter them. Or I'll try the scatter addon. Or I can do it by hand, it does not matter.

I think from a modeling perspective, the scene is 2/3rds done. Apart from the detailing for the items on top, the rest I have to do via sculpting. Let me take a short break here.

3pm. Let me resume. As the last thing I'll just hollow out the cup so it is not a bare cylinder.

3:10pm. 192 objects. Let me take a screenshot. I'll post it in the wip thread. I've decided to at least put in a handle for the cup, it is much better like this.

///

I think this is good enough for now. Modeling-wise, the room is 70% complete. I'd want to do more detailing on the objects in this image, and after that go into sculpting mode for stuff like blankets, curtains and bed creases as well as various clothing articles, but I'll leave it like this for now. What I will do is scatter a bunch of cylinders on the bed on the Blender side, and then start drawing practice. I think on the path I am currently on, there is nothing stopping me from becoming a good 3d artist, but I know enough to see that 3d on its own will never give me the speed that I want. I want good results as fast as possible and that makes developing 2d skills unavoidable. If I can't develop them I'll abandon this path and get a programming job. I need to tackle my fear from my school days and prove that my lack of art talent is just a lack of interest and poor instruction from the teacher.

///

3:30pm. Let me do it. I'll start by importing this shelf in Blender.

https://youtu.be/6LMuT2hN2yw Distribute Objects using Weight Paint-Geometry Nodes (Blender Tutorial)

I need a refresher on how to scatter stuff in Blender. Let me watch this.

4:15pm. It was easy enough to do though I needed a refresher. Right now I am just trying to get mentally ready for what is to come.

4:35pm. Let me start.

4:40pm. Hmmm, let me bring in an HDRI. This drab scene is not good. If I am going to get to the bottom of what is bothering me about the creases, I need some actual light. Let me play a bit with rendering.

4:50pm. I find the way Eevee computes transmission really confusing. Let me switch to Cycles.

5:30pm. https://youtu.be/cPDdjTh0EYM Easy Interior Lighting in Blender (Tutorial)

Yeah, now I am just fooling around. The interior is too dark, and it takes so long to render it is crazy. To get a clean image I'd need to wait almost a full hour, and that is without the balcony.

https://youtu.be/cPDdjTh0EYM?t=47

Hmmm, I should try the sky texture.

https://youtu.be/cPDdjTh0EYM?t=74

Never heard of Sun Position addon.

5:50pm. I hate these useless Blender tutorials. I watched the thing, but I don't know what any of these things do. Why do I need both a sky texture and a sun? What does the sun addon do?

https://youtu.be/xnC2wrUGb6A How to use Lightning Sun position add-on in Blender

Ah, wait, I think I understand. If I look at the sky texture it is the same everywhere. So of course I'd need a sun light to give it directionality.

https://www.reddit.com/r/blenderhelp/comments/7y8oa1/interior_lighting_nightmare_always_too_dark/

///

Blender does not know when something is an interior, so it treats every scene like an outdoor scene, but there is a way to let Blender know you are making an interior scene: portals. So how do you make a portal? You create an area light, put it on your window and in the properties you have to check the 'portal' option. You must do this for every window.

This should also remove noise, but I'd advise increasing the lamp strengths (using filmic blender). If you get more noise, use one of these techniques:

  • set the clamp to a low value. This is the threshold value. All pixels with a value above the set clamp will be changed by Blender.
  • reduce bounces, either in the lamp properties or in the properties panel under the render tab
  • use the denoiser
  • increase the amount of samples
  • use lamps with a large size

I hope this helps

///

I did portals in Clarisse, but I honestly had no idea Blender requires this as well.

6:15pm. https://www.mapsofworld.com/lat_long/croatia-lat-long.html

I am still fiddling around with it. I completely forgot what the latitude and longitude of Croatia are.

https://youtu.be/m9AT7H4GGrA The Secret Ingredient to Photorealism

https://docs.blender.org/manual/en/latest/render/cycles/light_settings.html

Light portals work by enabling the Portal option, and placing areas lights in windows, door openings, and any place where light will enter the interior.

So it is not specifically windows, but openings. I was wondering about that. I guess I'll put it smack dab in the middle then. Or actually, right at the very edge of the entrance.
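
For reference, a minimal bpy sketch of that setup (assumes Blender's Python API; the sizes and coordinates are placeholders for this scene):

```python
import bpy

# create an area light and mark it as a Cycles light portal
light = bpy.data.lights.new(name="DoorPortal", type='AREA')
light.shape = 'RECTANGLE'
light.size = 1.0                # width of the opening
light.size_y = 2.2              # height of the opening
light.cycles.is_portal = True

portal = bpy.data.objects.new("DoorPortal", light)
bpy.context.collection.objects.link(portal)
portal.location = (0.0, -2.0, 1.0)  # placeholder: flush with the doorway
```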

6:25pm. Ah, let me watch the whole thing by Blender Guru. There is no helping it. His videos are so high quality.

https://youtu.be/m9AT7H4GGrA?t=179

Hmmm, really? All this is new to me.

https://youtu.be/m9AT7H4GGrA?t=298

This is what I've tried, but I knew that it can't be the solution.

https://youtu.be/m9AT7H4GGrA?t=441

He is saying the real sun is magnitudes brighter and the only reason it looks so bright is because of the crushed dynamic range.

https://youtu.be/m9AT7H4GGrA?t=592

I had no idea sRGB was bad. I've been using it for everything. I mean, because it was the default and I had no reason to change it.

https://youtu.be/m9AT7H4GGrA?t=695

Actually I think this is the default in newer Blender.

View transform is filmic, but the display device and the sequencers are sRGB.

6:55pm. I'll assume the defaults are fine now.

https://youtu.be/m9AT7H4GGrA?t=1053

But the scene looks pretty dark...

https://youtu.be/m9AT7H4GGrA?t=1223

Do I have this mode in the current version?

https://youtu.be/m9AT7H4GGrA?t=1748

It seems Adobe has some document on blend modes and he is riffling through it. I am disappointed Google did not point me to it.

7:10pm. Let me try out false color. I've put in the portal as well earlier.

7:30pm. When is lunch. I guess I'll let this render for a while.

7:40pm. Done with lunch. And the render still looks like shit.

I think I really did make a mistake going into 3d. The noise level after 13m of this is still quite high.

7:55pm. Holy shit, these sky textures suck so bad. I can barely control them. I am using the sun position, but it puts the texture in the wrong place.

8pm. For some reason ambient occlusion makes the areas that should be dark brighter in Eevee. WTF?

8:15pm. The handle of that coffee cup was open and as a result had the wrong normals. I wish this was the worst problem I have right now.

https://blender.stackexchange.com/questions/213363/why-is-ambient-occlusion-creating-this-extra-light-eevee-includes-pic

8:25pm. Let me try it again. I'll try ramping up the light bounces and seeing whether that does anything.

https://blender.stackexchange.com/questions/6857/why-are-there-non-visible-objects-in-my-final-render

Why is Cycles rendering the hidden objects?

```python
import bpy

# sync render visibility with viewport visibility for every object in the scene,
# so objects hidden in the viewport don't show up in the Cycles render
for ob in bpy.context.scene.objects:
    ob.hide_render = ob.hide_get()
```

This is super annoying.

Let me put this shortcut at the top of the journal. What I've done is disabled the transmission bounces.

8:45pm. No. If I jack up the diffuse rays at the expense of everything else, then sure, I am getting something, but the time it takes to render goes up drastically.

https://trinumedia.gumroad.com/l/sMNjc [Addon] Match Render Visibility (free)

9:10pm. Holy shit, I am so fucking pissed. Why is there an object in the middle of the doorway in rendered view? What the fuck Eevee?

Because I should have excluded the cutters from the view layer.

9:15pm. The match render visibility addon is a real timesaver here.

///

Ugh, I didn't have any experience with indoor scenes so I wasn't prepared for the rendering nightmare those grills would give me. I guess there is a reason every indoor scene that I've seen rendered in 3d had big windows. What is happening is that the noise is huge in the rendered image and the room is much darker than it should be to boot. Literally, the only way to render it in a sane amount of time on my rig will be to open up those grills wide because right now they are trapping all the light rays inside the room.

I wish I could post a screenshot, but it seems the Cycles ones are over 4mb, probably because of all the noise. Here is one in Eevee. Ambient occlusion makes some areas brighter so it does not get many points for realism. I am going to have to think about what I want to do here. Since right now I just want to get some drawing practice in, maybe I'll just hide the balcony doors and let the light stream in. After that I'll put some real effort into getting this to look good using non-physically-based rendering.

///

9:30pm. Let me go with this. I am actually depressed now. But it does not really matter. Getting a good render out of this with Cycles was just a side thing. When it is time to take rendering seriously I'll check out Malt and put some real effort into making it look good. The reason why I need Cycles is so I can get accurate shadows for my drawing practice.

I am absolutely going to practice drawing starting tomorrow. This little obstacle won't hold me back."


Monday 2022-05-23 19:55:30 by GG

Hell yeah, I'm gonna write all this shit from scratch again


Monday 2022-05-23 21:20:24 by Sam Blenny

Add fizzbuzz test

This is a joke, kinda? I wanted to see if I could build the equivalent of an if statement from only my next conditional return word. It works, but the contortions I had to go through to make that happen are a bit ridiculous. Definitely need to add more control flow words.


Monday 2022-05-23 22:21:08 by TweetTweet777

added initial files

i will give no description fuck you


Monday 2022-05-23 22:54:37 by Chaz "Gamerappa" Péloquin

fuck you, no 2013 html5 player

i've wasted like 3 hours of my life getting nothing to work, fuck this.


Monday 2022-05-23 23:44:47 by san7890

Update Comments and Adjusts Incorrect Variables for Map Defines and Map Config (#66540)

Hey there,

These comments were really showing their age, and they gave the false impression that nothing had changed (there was a fucking City of Cogs mention in this comment!). I rewrote a bit of that, and included a blurb about using the in-game verb for Z-Levels so people don't get the wrong impressions of this quick-reference comment (they always do).

I also snooped around map_config.dm and I found some irregularities and rewrote the comments there to be a bit more readable (in my opinion). Do tell me if I'm a cringe bastard for writing what I did.

Also, we were using the Box whiteship/emergency shuttle if we were missing the MetaStation JSON. Whoops, let's make sure that's fixed.

People won't have to wander in #coding-general/#mapping-general asking "WHAT Z-LEVEL IS X ON???". It's now here for quick reference, as well as a long-winded section on why you shouldn't trust said quick reference.


< 2022-05-23 >