< 2020-03-01 >

1,566,409 events, 849,720 push events, 1,227,094 commit messages, 72,968,139 characters

Sunday 2020-03-01 00:22:26 by NeKitDS

[0.10.4] Remove several HTTPClient options.

Hello there. I know this message might seem kinda cliche to everyone reading it, but I honestly feel like I need to explain what this is all about. When I was creating gd.py, I did not intend to do anything harmful to the game or its servers. Just like in that commit (85de5978e99f463a2f8a6ffcab8ebbfcc9265edc), I did not think about the possible consequences of my actions. I added several things just to basically trick the servers. Eventually, it turned out to bypass the security and cause many problems. gd.py was used to raid servers in a way that they just could not understand what was going on. I want to apologize for creating an API that allowed many people to easily raid different levels/users. I should have considered all the effects that my library could cause and what it could actually be used for.


Sunday 2020-03-01 00:44:36 by Saqib Ali

Job Application for Senior Data Analyst / Data Scientist at Splice.com. Data Scientist (Series C Fintech Startup) | Averity. Data Scientist - Computational Social Science | ACRONYM. Senior Data Scientist - Computational Social Science | ACRONYM. Predicting neighborhood change using big data and machine learning: Implications for theory, methods, and practice - Part 2 | Berkeley Institute for Data Science. Integrity Web Consulting Data Scientist - Web Engineer/Web Developer | SmartRecruiters. Job Application for Data Scientist at Dynamic Yield. Postdoctoral Appointee – Computational and Data Scientist in Lemont, IL for Argonne National Laboratory. Sirona Medical - Senior Data Scientist (Biomedical NLP). 3rd Annual Data Science Hackathon | Saint Mary's College, Notre Dame, IN.


Sunday 2020-03-01 01:50:10 by Steve Kondik

sys: Horrible hack for compat_sysinfo userspace confusion

  • The sizes of various entries in the sysinfo struct are 32-bit or 64-bit, depending on the architecture. On Android, the entire media stack is running in 32-bit mode. It's been discovered that various proprietary pieces of this stack are using these values without considering the mem_unit flag. And I'm not surprised: they never would have seen anything other than 1 until we started getting devices with >4GB of RAM.

    The original code is totally fine. The problem is how the values get scaled. By default, it goes straight for PAGE_SIZE as the mem_unit. For a device with 6GB of RAM, we go from totalmem=6014545920 down to totalmem=1468395. If a calculation compares this with raw bytes (say 512MB) in order to change some behavior for a low-ram system, we are screwed.

    This is actually happening in the OnePlus camera driver. It's a Qualcomm-inherited bug, and easily fixable. Unfortunately we're not quite ready to stop using the prebuilt HAL. The result is that ZSL performs badly, ZSL-HDR is disabled, continuous shot is hacky, and various other de-featuring happens thanks to this. I suspect similar bugs are lurking elsewhere.

    To bandaid the shit out of this problem, we'll shift in a loop with mem_unit monotonically increasing until the upper 32 bits are clear. On our 6GB system, this gives us something more sane where mem_unit=2 and totalmem=3007272960, and thus passes the 1GB / 512MB byte comparisons.
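    For illustration only (not the actual patch), here is a sketch of the kind of scaling loop described above; the helper name is invented, but the field names follow struct sysinfo:

        #include <linux/types.h>
        #include <linux/sysinfo.h>

        /* Halve the counters and double mem_unit until every value fits
         * in 32 bits, instead of jumping straight to PAGE_SIZE. */
        static void scale_sysinfo_for_compat(struct sysinfo *info)
        {
            u64 all = (u64)info->totalram | info->freeram | info->sharedram |
                      info->bufferram | info->totalswap | info->freeswap |
                      info->totalhigh | info->freehigh;

            info->mem_unit = 1;
            while (all >> 32) {
                info->totalram >>= 1;  info->freeram >>= 1;
                info->sharedram >>= 1; info->bufferram >>= 1;
                info->totalswap >>= 1; info->freeswap >>= 1;
                info->totalhigh >>= 1; info->freehigh >>= 1;
                info->mem_unit <<= 1;
                all >>= 1;
            }
        }

    On the 6GB example above, the loop stops after one iteration: mem_unit becomes 2 and totalram becomes 3007272960, which fits in 32 bits.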

Change-Id: Id796f928d0d217021458facb2dd9519900028cf8 Signed-off-by: Francisco Franco [email protected]


Sunday 2020-03-01 03:44:59 by Michal Hocko

oom: add helpers for setting and clearing TIF_MEMDIE

This patchset addresses a race which was described in the changelog for 5695be142e20 ("OOM, PM: OOM killed task shouldn't escape PM suspend"):

    PM freezer relies on having all tasks frozen by the time devices are getting frozen so that no task will touch them while they are getting frozen. But OOM killer is allowed to kill an already frozen task in order to handle OOM situation. In order to protect from late wake ups OOM killer is disabled after all tasks are frozen. This, however, still keeps a window open when a killed task didn't manage to die by the time freeze_processes finishes.

The original patch hasn't closed the race window completely because that would require a more complex solution as it can be seen by this patchset.

The primary motivation was to close the race condition between the OOM killer and the PM freezer completely. As Tejun pointed out, even though the race condition is unlikely, it would be that much harder to debug weird bugs deep in the PM freezer when the debugging options are reduced considerably. I can only speculate about what might happen when a task is unexpectedly still runnable.

On the plus side, and as a side effect, the oom enable/disable now has better (full barrier) semantics without polluting hot paths.

I have tested the series in KVM with 100M RAM:

  • many small tasks (20M anon mmap) which are triggering OOM continually
  • s2ram which resumes automatically is triggered in a loop:

        echo processors > /sys/power/pm_test
        while true
        do
            echo mem > /sys/power/state
            sleep 1s
        done
  • simple module which allocates and frees 20M in 8K chunks. If it sees freezing(current) then it tries another round of allocation before calling try_to_freeze (a rough illustrative sketch of such a module follows this list)
  • debugging messages of PM stages and OOM killer enable/disable/fail added and unmark_oom_victim is delayed by 1s after it clears TIF_MEMDIE and before it wakes up waiters.
  • rebased on top of the current mmotm which means some necessary updates in mm/oom_kill.c. mark_tsk_oom_victim is now called under task_lock but I think this should be OK because __thaw_task shouldn't interfere with any locking down wake_up_process. Oleg?
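For illustration only - this is not code from the patch series - here is a rough sketch of the kind of allocation module described in the bullet above, using standard kthread and freezer APIs. The thread name is borrowed from the kmem_eater line in the log below; everything else is guessed:

    #include <linux/module.h>
    #include <linux/kthread.h>
    #include <linux/slab.h>
    #include <linux/freezer.h>
    #include <linux/delay.h>
    #include <linux/err.h>

    #define CHUNK_SIZE 8192
    #define NR_CHUNKS  (20 * 1024 * 1024 / CHUNK_SIZE)

    static struct task_struct *eater_thread;

    static int kmem_eater_fn(void *unused)
    {
        static void *chunks[NR_CHUNKS];
        int i;

        set_freezable();
        while (!kthread_should_stop()) {
            /* Allocate ~20M in 8K chunks, then free it all again. */
            for (i = 0; i < NR_CHUNKS; i++)
                chunks[i] = kmalloc(CHUNK_SIZE, GFP_KERNEL);
            for (i = 0; i < NR_CHUNKS; i++)
                kfree(chunks[i]);
            /* If a freeze is pending, try one more allocation round
             * before actually entering the freezer. */
            if (freezing(current)) {
                void *p = kmalloc(CHUNK_SIZE, GFP_KERNEL);
                kfree(p);
                try_to_freeze();
            }
            msleep(100);
        }
        return 0;
    }

    static int __init kmem_eater_init(void)
    {
        eater_thread = kthread_run(kmem_eater_fn, NULL, "kmem_eater");
        return PTR_ERR_OR_ZERO(eater_thread);
    }

    static void __exit kmem_eater_exit(void)
    {
        kthread_stop(eater_thread);
    }

    module_init(kmem_eater_init);
    module_exit(kmem_eater_exit);
    MODULE_LICENSE("GPL");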

As expected there are no OOM killed tasks after oom is disabled and allocations requested by the kernel thread are failing after all the tasks are frozen and OOM disabled. I wasn't able to catch a race where oom_killer_disable would really have to wait but I kinda expected the race is really unlikely.

    [ 242.609330] Killed process 2992 (mem_eater) total-vm:24412kB, anon-rss:2164kB, file-rss:4kB
    [ 243.628071] Unmarking 2992 OOM victim. oom_victims: 1
    [ 243.636072] (elapsed 2.837 seconds) done.
    [ 243.641985] Trying to disable OOM killer
    [ 243.643032] Waiting for concurent OOM victims
    [ 243.644342] OOM killer disabled
    [ 243.645447] Freezing remaining freezable tasks ... (elapsed 0.005 seconds) done.
    [ 243.652983] Suspending console(s) (use no_console_suspend to debug)
    [ 243.903299] kmem_eater: page allocation failure: order:1, mode:0x204010
    [...]
    [ 243.992600] PM: suspend of devices complete after 336.667 msecs
    [ 243.993264] PM: late suspend of devices complete after 0.660 msecs
    [ 243.994713] PM: noirq suspend of devices complete after 1.446 msecs
    [ 243.994717] ACPI: Preparing to enter system sleep state S3
    [ 243.994795] PM: Saving platform NVS memory
    [ 243.994796] Disabling non-boot CPUs ...

The first 2 patches are simple cleanups for OOM. They should go in regardless of the rest IMO.

Patches 3 and 4 are trivial printk -> pr_info conversion and they should go in ditto.

The main patch is the last one and I would appreciate acks from Tejun and Rafael. I think the OOM part should be OK (except for __thaw_task vs. task_lock, where a look from Oleg would be appreciated) but I am not so sure I haven't screwed anything up in the freezer code. I have found several surprises there.

This patch (of 5):

This patch is just preparatory and doesn't introduce any functional change.

Note: I am utterly unhappy about the lowmemory killer abusing TIF_MEMDIE just to wait for the oom victim and to prevent new killing. This is just a side effect of the flag. The primary meaning is to give the oom victim access to the memory reserves, and that shouldn't be necessary here.
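For illustration only - not the actual patch - a stripped-down sketch of what helpers named like the ones mentioned above (mark_tsk_oom_victim / unmark_oom_victim) could look like; the real versions would also handle the oom_victims accounting and the waking up of waiters described in the test log:

    #include <linux/sched.h>

    /* Give the victim access to memory reserves so it can exit quickly. */
    void mark_tsk_oom_victim(struct task_struct *tsk)
    {
        set_tsk_thread_flag(tsk, TIF_MEMDIE);
    }

    /* Called once the victim is past the point of no return, so the
     * freezer / OOM-disabling logic can make progress. */
    void unmark_oom_victim(void)
    {
        clear_thread_flag(TIF_MEMDIE);
    }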

Signed-off-by: Michal Hocko [email protected] Cc: Tejun Heo [email protected] Cc: David Rientjes [email protected] Cc: Johannes Weiner [email protected] Cc: Oleg Nesterov [email protected] Cc: Cong Wang [email protected] Cc: "Rafael J. Wysocki" [email protected] Signed-off-by: Andrew Morton [email protected] Signed-off-by: Linus Torvalds [email protected]


Sunday 2020-03-01 04:40:09 by Findlay Smith

fix fucking CORS issues I hate fucking CORS so much fuck you CORS you piece of shit


Sunday 2020-03-01 05:18:46 by ravi-kumar12

Create IT’S MAGIC!

Sussutu is a world-renowned magician. And recently, he was blessed with the power to remove EXACTLY ONE element from an array.

Given an array A (index starting from 0) with N elements. Now, Sussutu CAN remove only that element which makes the sum of ALL the remaining elements exactly divisible by 7.

Throughout his life, Sussutu was so busy with magic that he could never get along with maths. Your task is to help Sussutu find the first array index of the smallest element he CAN remove.

Input:

The first line contains a single integer N.

Next line contains N space-separated integers A_k, 0 < k < N.

Output:

Print a single line containing one integer, the first array index of the smallest element he CAN remove, and -1 if there is no such element that he can remove!

Constraints:

1 < N < 10^5

0 < A_k < 10^9

SAMPLE INPUT
5
14 7 8 2 4

SAMPLE OUTPUT
1

Explanation: Both 14 and 7 are valid answers, but since 7 is the smallest, the required array index is 1.
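A short solution sketch (not part of the original file) under the constraints above: compute the total sum, find every element whose removal leaves a multiple of 7, and print the first index of the smallest such element, or -1:

    #include <stdio.h>

    int main(void)
    {
        static long long a[100000];
        long long sum = 0;
        int n, i, best = -1;

        if (scanf("%d", &n) != 1)
            return 1;
        for (i = 0; i < n; i++) {
            scanf("%lld", &a[i]);
            sum += a[i];
        }
        for (i = 0; i < n; i++) {
            /* Keep the first index of the strictly smallest removable value. */
            if ((sum - a[i]) % 7 == 0 && (best == -1 || a[i] < a[best]))
                best = i;
        }
        printf("%d\n", best);
        return 0;
    }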


Sunday 2020-03-01 06:08:41 by Kristóf Umann

[analyzer][MallocChecker][NFC] Communicate the allocation family to auxiliary functions with parameters

The following series of refactoring patches aims to fix the horrible mess that MallocChecker.cpp is.

I genuinely hate this file. It goes completely against how most of the checkers are implemented, it's by far the biggest headache regarding checker dependencies, checker options, or anything you can imagine. On top of all that, it's just bad code. It's seriously everything that you shouldn't do in C++, or any other language really. Bad variable/class names, in/out parameters... Apologies, rant over.

So: there are a variety of memory-manipulating functions this checker models. One aspect of these functions is their AllocationFamily, which we use to distinguish between allocation kinds, like using free() on an object allocated by operator new. However, since we always know which function we're actually modeling, in fact we know it at compile time, there is no need to use tricks to retrieve this information out of thin air n+1 function calls down the line. This patch changes many methods of MallocChecker to take a non-optional AllocationFamily template parameter (which also makes stack dumps a bit nicer!), and removes some no longer needed auxiliary functions.

Differential Revision: https://reviews.llvm.org/D68162


Sunday 2020-03-01 08:02:44 by osanotech

Update index.html

<title>OSANO</title> <script data-ad-client="ca-pub-1956484825193057" async src="https://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script>
OSANOTECH
  • here for you!

BRYAN ......... or just call me  OSANO(Osanotech)

ABOUT ME

Hey, this is my homepage, so I have to say something about myself. Sometimes it is hard to introduce yourself because you know yourself so well that you do not know where to start with. Let me give a try to see what kind of image you have about me through my self-description. I hope that my impression about myself and your impression about me are not so different. Here it goes....

I am a person who is positive about every aspect of life. There are many things I like to do, to see, and to experience. I like to read and write code; I like meeting new friends; I like to associate with people; I like sharing ideas. I like to feel the music flowing on my face. I like to sleep late, I like to get up early; I hate to be alone, I like to be surrounded by people; I like motivational books and movies.

I always wanted to be a great writer, like Mark Zuckerberg, who developed Facebook, or like Bill Gates, the co-founder of Microsoft. They have influenced millions of people through their books. I also wanted to be a great psychologist, like William James or Sigmund Freud, who could read people's minds. Of course, I am nowhere close to these people, yet. I am just someone who does some teaching, some research, and some writing. But my dream is still alive.

"Do not expect too much, and keep your sense of humor."


This is my everyday work-through system:
Pursue Passions, Not Paychecks
Be Crafty, Lovable, Systematic, and Relentless
Don't take yourself too seriously
Read every day
Believe in Change
Invest in Ideas That Matter
Be Lazy and Dumb
I'm from Kisii, Kenya.


call or whatsapp
the developer at 0769146217 or click here to get in touch with the developer

© Copyright 2020 OSANO TECH | All Rights Reserved THANKS FOR VISITING THIS SITE!


Sunday 2020-03-01 09:52:44 by Alex Cruz

Set scrollview on the power menu

So why? Because fuck you that's why...

No, you need this for if and when we decide to add more items to the power menu and the density is too high. Previously if you had more than 5 items, it would cut you off. So you either had to decide which 5 items you wanted or deal with the jank. That's no longer the case.

  • Added a landscape view so we can set a horizontal scrollview

  • Made the power menu dialog all one color. Josh and I talked about this and I previously made the case to keep it the same but after thinking it over, it looks better all one color.

Change-Id: I8ec4b1a85994251126433cea0640e000af78c65d


Sunday 2020-03-01 11:54:54 by Marko Grdinić

"8:10am. Got up 10m ago. It is time to chill.

9:05am. Done. Let me do some work. Today come the JS DOM tutorials. Let me go through the first one.

Damn I sure feel like slacking here.

10am. https://youtu.be/0ik6X4DJKCc?t=1940

Ok, this part is new to me. How does this work? ...Ah, he wraps it in quotes a moment later. Nevermind.

10:15am. The stuff here is super simple. Maybe I should have skipped this, but since I did say I am going to do it I might as well do it. Let me go through the 3 left and then I'll take a proper break. Hopefully I can hold myself from falling asleep.

10:20am. https://youtu.be/mPd2aJXCZ2g?t=382

What is going on here?

11am. Done with 2. Had to take break. Let me move onto 3. This is so boring, but I have to persevere. After I am done with this DOM stuff, I am going to move on to the framework vids. After I cross that mountain, I can finally move onto the backend.

11:20am. Rather than watching the video, I am thinking about Spiral.

Because that From: NearTo: idea, I've been thinking of how to fit it all together.

I am going to do one thing - I am going to bring back objects. This time they will be simpler than before - I'll simply use them for modules. One difference from records is that it will be possible to apply keyword args to them directly: Range .From: a NearTo: b. Also, unlike records which use immutable maps, I will implement them using dictionaries. This will make indexing into them faster at compile time.

11:25am. This is really great. This is how things should be done.

11:30am. Ever since I had that idea about big keyword arg uses, I've been thinking on and off on how to implement this and it is finally coming together.

12pm. https://www.youtube.com/watch?v=i37KVt_IcXw

I know I said I would do 4, and I certainly had the time for it, but let me stop here. I'll leave the last for after the break. I am going to fall asleep here at this rate.

12:55pm. Let me finish the Toji ep and then I will do the chores. After that comes the last video, and one of the framework courses. Also, it seems I forgot to commit. Let me do it here."


Sunday 2020-03-01 12:14:34 by petrero

31.1.Automatic Performance Checks: Builds{Hello Builds; Build "URLs to Test"; Build Log: blackfire-player}

Head back to https://blackfire.io, click "Environments" and click into our "Sasquatch Sightings Production" environment. Interesting. By default, it takes us not to the profiles tab... but to a tab called "Builds". And, look on the right: "Periodic Builds": "Builds are started every 6 hours"... which we could change to a different interval. Further below, there are a bunch of "notification channels" where you can tell Blackfire that you want to be notified - like via Slack - of the results of this "build" thingy.

Hello Builds

Ok, what the heck is a build anyways? To find out, let's trigger one manually, then stand back and see what happens. Click "Start a Build". The form pre-fills the URL to our site... cool... and we can apparently give it a title if we want. Let's... just start the build. This takes us to a new page where.... interesting: it's running an "Untitled Scenario"... then it looks like it went to the homepage... and created a profile? Let's... back up: there are a lot of interesting things going on. And I love interesting things! First, we've seen this word "scenario" before! Earlier, we used the blackfire-player: a command-line tool that's made by the Blackfire people... but can be used totally outside of the profiling tool. We created a scenario.bkf file where we defined a scenario and used the special blackfire-player language to tell it to go to the homepage, assert a few things, then click on the "Log In" link and check something else:

scenario.bkf (20 lines):

    name "Various scenarios for the site"

    # override with --endpoint option
    endpoint "https://localhost:8000"

    scenario
        name "Basic Visit"

        visit url("/")
            name "Homepage"
            expect status_code() == 200
            expect css("tbody.js-sightings-list tr").count() > 10
            # won't work until we're using Blackfire environment
            assert metrics.sql.queries.count < 30

        click link("Log In")
            name "Login page"
            expect status_code() == 200

At that time, this was a nice way to "crawl" a site and test some things on it. The "build" used the same "scenario" word. That's not an accident. More on that soon.

Build "URLs to Test" The second important thing is that this profiled the homepage because, when we created our environment, we configured one "URL to test": the homepage. That's what the build is doing: "testing" - meaning profiling - that page. Let's add a second URL. One other page we've been working on a lot is /api/github-organization: this JSON endpoint. Copy that URL and add it as a second "URL to test". Click save... then manually create a second build. Like before, it creates this "Untitled Scenario" thing. Ah! But this time it profiled both pages! The build also shows up as green: the build "passed". This is a critical thing about builds. It's not simply that a build is an automated way to create a profile for a few pages. That would be pretty worthless. The real value is that you can write performance tests that cause a build to pass or fail. Check it out "1 successful constraint" - which is that "HTTP Requests should be limited to 1 per page". Hey! That's the test that we set up inside .blackfire.yaml!

.blackfire.yaml (6 lines):

    "tests":
        "HTTP Requests should be limited to 1 per page":
            path: "/.*"
            assertions:
                - "metrics.http.requests.count"

The real beauty of tests is not that the "Assertions" tab will look red when you're looking inside a profile. The real beauty is that you can configure performance constraints that should pass whenever these builds happen. If a build fails - maybe because you introduced some slow code - you can be notified.

Build Log: blackfire-player But there's even more cool stuff going on. Near the bottom, click to see the "Player output". Woh! It shows us how builds work behind-the-scenes: the Blackfire server uses the blackfire-player! Look closer: it's running a scenario: visit url(), method 'GET', then visit url() of /api/github-organization. It's a bit hard to read, but this converted our 2 "URLs to test" into a scenario - using the same format as the scenario.bkf file - then passed that to blackfire-player. You can even see it reloading both pages multiple times to get 10 samples. That's one of the options it added in the scenario. So with just a tiny bit of configuration, Blackfire is now creating a build every 6 hours. Each time, it profiles these 2 pages and, thanks to our one test, if either page makes more than one HTTP request, the build will fail. By setting up a notification, we'll know about it. The fact that the build system uses blackfire-player makes me wonder: instead of configuring these URLs, could we instead have the build system run our custom scenario file? I mean, it's a lot more powerful: we can visit pages, but also click links and fill out forms. We can also add specific assertions to each page... in addition to our one "global" test about HTTP requests. The answer to this question is... of course! And it's where the build system really starts to shine. We'll talk about that next.

History & Graphs from Automated Builds But before we do, I want you to see what the build page looks like once it's had enough time to execute a few automated builds. Let's check out the SymfonyCasts environment. Woh! It's graph time! Because this environment has a history of automated builds, Blackfire creates some super cool graphs: like our cache hit percentage and our cache levels. You can see that my OPcache Interned Strings Buffer cache is full. I really need to tweak some config to increase that. I can also see how the different URLs are performing over time for wall time, I/O, CPU, Memory & network as well as other stuff. We can click to see more details about any build... and even look at any of its profiles. Anyways, next: let's make the build system smarter by executing our custom scenario.


Sunday 2020-03-01 12:14:34 by petrero

28.1.Database Tricks on SymfonyCloud

We just deployed to SymfonyCloud! Well, I mean, we did... but it doesn't... ya know... work yet. Because this is the production 500 error, we can't see the real problem. No worries! Head back to your terminal. The symfony command has an easy way to check the production logs. It is...

symfony logs

This prints a list of all the logs. The app/ directory is where our application is deployed to - so the first item is our project's var/log/prod.log file. You can also check out the raw access log... or everything. Hit 0 to "tail" the prod.log file. And... there it is:

An exception has occurred... Connection refused.

Adding a Database to SymfonyCloud I recognize this: it's a database error.... which... hmm... makes sense: we haven't told SymfonyCloud that we need a database! Let's go do that! Google for "SymfonyCloud MySQL" to find... oh! A page that talks about exactly that. Ok, we need to add a little bit of config to 2 files. The first is .symfony/services.yaml. This is where you tell SymfonyCloud about all the services you need - like a database service, ElasticSearch, Redis, RabbitMQ, etc. Copy the config for .symfony/services.yaml... then open that file and paste. The database is actually MariaDB, which is why the version here is 10.2: MariaDB version 10.2. Notice that we've used the key mydatabase. That can be anything you want: we'll reference this string from the other config file that we need to change: .symfony.cloud.yaml. Inside that file, we need a relationships key: this is what binds the web container to that database service. Let's see... we don't have a relationships key yet, so let's add it: relationships and, below, add our first relationship with a special string: database set to mydatabase:mysql. This syntax... is a little funny. The mydatabase part is referring to whatever key we used in services.yaml - and then we say :mysql... because that service is a mysql type. The really important thing is that we called this relationship database. Thanks to that SymfonyCloud will expose an environment variable called DATABASE_URL which contains the full MySQL connection string: username, host, database name and all:

.env (29 lines):

    DATABASE_URL=mysql://root:@127.0.0.1:3306/blackfire

It's literally DATABASE_URL and not PIZZA_URL because we called the relationship database instead of pizza... which would have been less descriptive, but more delicious. This is important because DATABASE_URL happens to be the environment variable that our app will use to connect to the database. In other words, our app will instantly have database config.

Back at the terminal, hit Ctrl+C to exit from logging. Let's add the two changes and commit them:

git add . git commit -m "adding SfCloud database" Now, deploy!

    symfony deploy

Oh, duh - run with the --bypass-checks flag:

    symfony deploy --bypass-checks

The deploy will still take some time - it has a lot of work to do - but it'll be faster than before. When it finishes... it dumps the same URL - that won't change. But to be even lazier than last time, let's tell the command to open this URL in my browser... for me:

    symfony open:remote

Tunneling to the Database

And... we have a deployed site! Woo! The database is empty... but if this were a real app, it would start to be populated by real users entering their real Bigfoot sightings... cause Bigfoot is... totally real. But... to make this a bit more interesting for us, let's load the fixture data one time on production.

This is a bit tricky because the fixture system - which comes from DoctrineFixturesBundle - is a Composer "dev" dependency... which means that it's not even installed on production. That's good for performance. If it were installed, we could run:

    symfony ssh

To SSH into our container, and then execute the command to load the fixtures. But... that won't work.

No problem! We can do something cooler. Exit out of SSH, and run:

    symfony tunnel:open

I love this feature. Normally, the remote database isn't accessible by anything other than our container: you can't connect to it from anywhere else on the Internet. It's totally firewalled. But suddenly, we can connect to the production database locally on port 30000. We can use that to run the fixtures command locally - but send the data up to that database. Do it by running:

    DATABASE_URL=mysql://root:@127.0.0.1:30000/main php bin/console doctrine:fixtures:load

Ok, let's break this down. First, there is actually a much easier way to do all of this... but I'll save that for some future SymfonyCloud tutorial. Basically, we're running the doctrine:fixtures:load command but sending it a different DATABASE_URL: one that points at our production database. When you open a tunnel, you can access the database with root user, no password - and the database is called main. The only problem is that this command... takes forever to run. I'm not sure exactly why - but it is doing all of this over a network. Go grab some coffee and come back in a few minutes. When it finishes... yes! Go refresh the page! Ha! We have a production site with at least enough data to make profiling interesting. Next, let's do that! Let's configure Blackfire on production! That's easy right? Just repeat the Blackfire install process on a different server... right? Yep! Wait, no! Yes! Bah! To explain, we need to talk about a wonderful concept in Blackfire called "environments".


Sunday 2020-03-01 12:14:34 by petrero

30.1.Production Profile: Cache Stats & More Recommendations{Profiles Belong to the Environment; Caching Information; Quality & Security Recommendations}

We just profiled our first page on production, which is using the Blackfire Server Id and Token for the environment we created.

Profiles Belong to the Environment

Go to https://blackfire.io, click "Environments", open our new environment... and click the "Profiles" tab. Yep! Whenever anyone creates a profile using this environment's credentials, it will now show up here: the profile belongs to this environment. We haven't invited any other users to this environment yet, but if we did, they would immediately be able to access this area and trigger new profiles with their browser extension. If you go back to https://blackfire.io to see your dashboard, the new profile also shows up here. But that's purely for convenience. The profile truly belongs to the environment. You can even see that right here. But Blackfire places all profiles that I create on this page... to make life nicer. Click the profile to jump into it. Of course... this looks exactly like any profile we created on our local machine. But it does have a few differences.

Caching Information Hover over the profile name to find... "Cache Information". We talked about this earlier: it shows stats about various different caches on your server and how much space each has available. Now that we're profiling on production, this data is super valuable! For example, if your OPcache filled up, your site would start to slow down considerably... but it might not be very obvious when that happens. It's not like there are alarms that go off once PHP runs out of OPcache space. But thanks to this, you can easily see how things really look, right now, on production. If any of these are full or nearly full, you can read documentation to see which setting you need to tweak to make that cache bigger.

Quality & Security Recommendations The other thing I want to show you is under "Recommendations" on the left. There are 3 types of recommendations... and we have one of each: the first is a security recommendation, the second is a quality recommendation and the third a performance recommendation. Only the performance recommendations come standard: the other two require an "Add on"... which I didn't have until I started using my organization's plan. As always, to get a lot more info about a problem and how to fix it, you can click the question mark icon.

Converting Recommendations into Assertions One of my favorite things about recommendations is that you can easily convert any of these into an assertion. If you click on assertions, you'll remember that we created one "test" that said that every page should have - at maximum - one HTTP request. We configured that inside of our .blackfire.yaml file: we added tests, configured this test to apply to every URL, and leveraged the metrics system to write an expression. Back on the recommendations, click to see more info on one of these... then scroll down. Every recommendation contains code that you can copy into your .blackfire.yaml file to convert that recommendation into a test... or "assertion". That might not seem important right now... because so far, it looks like doing that would simply "move" this from a "warning" under "Recommendations" to a "failure" under "Assertions"... which is cool... but just a visual difference. But! In a few minutes, we'll discover that these assertions are much more important than they seem. To see why, we need to talk about the key feature and superpower of environments: builds.


Sunday 2020-03-01 12:14:34 by petrero

29.1.Blackfire Environments{Hello: Environments; Understanding Organizations; Environment vs Personal Server Credentials; Configuring Blackfire on SymfonyCloud}

Now that our site is deployed - woo! - how can we get Blackfire working on it? Well... we already know the answer. If you find the Blackfire Install page... it makes it easy: I want to install on "a server"... and let's pretend it uses Ubuntu. Getting Blackfire installed on your production machine is as easy as running the commands below to install the Blackfire PHP extension - the Probe, install the Agent and configure the agent with our server id and token. Easy peasy!

Hello: Environments

But... some Blackfire account levels offer a kick-butt feature called environments. If you have access to Blackfire environments - or if you're able to get a "plan" that offers environments - I highly recommend them.

Tip

Blackfire environments require a Premium plan or higher.

An environment is basically an isolated Blackfire account. When you have an environment, you send your profiles to that environment. The first advantage is that you can invite multiple people to an environment, which means that anyone can profile your production site and see other profiles made by people on your team. It also has other superpowers - ahem, builds - that really make it shine.

Understanding Organizations So let's create an environment! Go back to https://blackfire.io and click on the "Environments" tab. Actually, click on the "Organizations" tab... that's where this all starts. Blackfire organizations are a bit like GitHub organizations. With GitHub, you can subscribe to a "plan" directly on your personal account or you can create an organization, have it subscribe & pay for a plan, and then invite individual users to the organization. Blackfire organizations work exactly like that. And if you want to use environments, you need to create an organization and subscribe to a Blackfire plan through that organization. This did confuse me a bit at first. Basically, unless you just want the lowest Blackfire paid plan, you should probably always create an organization and subscribe to Blackfire through it. It just has a few more features than subscribing with your personal account.

Creating an Environment Anyways, I've already got an organization set up and subscribed to a plan. Once you have an organization, you can click into it to create a new environment. I already have one for SymfonyCasts.com production. Click to create a new one. Let's call it: "Sasquatch Sightings Production". For the "Environment Endpoint", it wants the URL to the site. Again, if this were a real project, I would attach a real domain... but copy the weird domain name, and paste. Select your timezone, sip some coffee, and... "Create environment" ! On the second step, it asks us to provide URLs to test... and it starts with just one: the homepage. We're going to talk more about this soon, so just leave it. I'll also uncheck the build notifications - more on those later.

Environment vs Personal Server Credentials Hit "Save settings" and... we're done! It rewards us with a shiny new "Server Id" and "Server Token". This is super important. No matter how you install Blackfire on a server, you eventually need to configure the "Server id" and "Server Token". This is basically a username & password that tells Blackfire which account a profile should be sent to. When you register with Blackfire, it immediately created a "Server Id" and "Server Token" connected with your personal account. We used that when we installed Blackfire on our local machine. But now that we have an environment, it has its own Server Id and token. The drop-down on the Install page is allowing us to choose which credentials we want to see on this page. Locally, we should still use our personal credentials: it keeps things cleaner. But on production, we should use the new environment's Server Id and Token. The install page gives us all the commands we need using those credentials. Oh, and by the way: if you have a "free" personal account... but are attached to an organization with a paid plan, any profiles you create with your personal Server Id and Token will inherit the features from that organization's plan. That lets us use our personal credentials locally and still get all the Blackfire features we're paying for. One exception to that rule, unfortunately, is "Add-Ons".

Configuring Blackfire on SymfonyCloud

Ok, let's get our production machine set up. I'll select "Symfony Cloud" as my host... which takes me to a dedicated page on this topic. Let's see... step one is, instead of installing Blackfire with something like apt-get, we'll add a line to .symfony.cloud.yaml. I already have an extensions key... so just add blackfire. Boom! Blackfire is installed. Add this file to Git... and commit it:

git add . git commit -m "adding blackfire extension" The other step is to configure Blackfire. Once again, it has a drop-down to select between my personal credentials and credentials for an enivornment. Select our "Sasquatch production" environment. Cool! This gives us a command to set two SymfonyCloud variables. Copy that, move over, and paste:

    symfony var:set BLACKFIRE_SERVER_ID=XXXXXX BLACKFIRE_SERVER_TOKEN=XXXXXX

Ok... we're good! To make both changes take effect, deploy!

    symfony deploy --bypass-checks

I'll fast-forward. Once this finishes... move over and refresh. Ok... everything still works. Now, moment of truth: open the Blackfire browser extension and create a new profile. It's working! I'll call it: [Recording] First profile in production. Next, let's... look at this profile! It will contain a few new things and some data that is much more relevant now that we're on production.


Sunday 2020-03-01 12:14:34 by petrero

32.1.Builds with Custom Scenarios{Scenarios in .blackfire.yaml; Building the Custom Scenario; Per Page Assertions/Tests}

A few chapters ago, we created this scenario.bkf file. It's written in a special blackfire-player language where we write one or more "scenarios" that, sort of, "crawl" a web page, asserting things, clicking on links and even submitting forms. This a simple scenario: the tool can do a lot more. On the surface, apart from its name, this has nothing to do with the Blackfire profiler system: blackfire-player is just a tool that can read these scenarios and do what they say. At your terminal, run this file:

    blackfire-player run scenario.bkf --ssl-no-verify

That last flag avoids an SSL problem with our local web server. When we hit enter... it goes to the homepage, clicks the "Log In" link and... it passes.

Scenarios in .blackfire.yaml

This is cool... but we can do something way more interesting. Copy the entire scenario from this file, close it, and open .blackfire.yaml. Add a new key called scenarios set to a |. That's a YAML way of saying that we will use multiple lines to set this. Below, indent, then say #! blackfire-player. That tells Blackfire that we're about to use the blackfire-player syntax... which is the only format supported here... but it's needed anyways. Below, paste the scenario. Make sure it's indented 4 spaces. The cool thing is that we can still execute the scenario locally: just replace scenario.bkf with .blackfire.yaml. The player is smart enough to know that it can look under the scenarios key for our scenarios.

    blackfire-player run .blackfire.yaml --ssl-no-verify

But if you run this... error!

Unable to crawl a non-absolute URI /. Did you forget to set an endpoint?

Duh! Our scenario.bkf file had an endpoint config:

scenario.bkf (20 lines):

    # override with --endpoint option
    endpoint "https://localhost:8000"

You can copy this into your .blackfire.yaml file. Or you can define the endpoint by adding --endpoint=https://localhost:8000:

    blackfire-player run .blackfire.yaml --ssl-no-verify --endpoint=https://localhost:8000

Now... it works!

Building the Custom Scenario

So... why did we move the scenario into this file? To find out, add this change to git... and commit it.

git add . git commit -m "moving scenarios into blackfire config file" Then deploy:

    symfony deploy --bypass-checks

Once that finishes... let's go see what changed. First, if we simply went to our site and manually created a profile - like for the homepage - the new scenarios config would have absolutely no effect. Scenarios don't do anything to an individual profile. Instead, scenarios affect builds. Let's start a new one: I'll give this one a title: "With custom scenarios". Go! Awesome! Now, instead of that "Untitled Scenario" that tested the two URLs we configured, it's using our "Basic visit" scenario! It goes to the homepage, then clicks "Log In" to go to that page. Yep, as soon as we add this scenarios key to .blackfire.yaml, it no longer tests these URLs. In fact, these are now meaningless. Instead, we're now in the driver's seat: we control the scenario or scenarios that a build will execute.

Per Page Assertions/Tests Even better, we have a lot more control now over the assertions - or "tests"... Blackfire uses both words - that make a build pass or fail. For example, the "HTTP requests should be limited to one per page" will be run against all pages in the scenarios - that's 2 pages right now. But the homepage also has its own assert: that the SQL queries on this page should be less than 30. If you look back at the build... we can see that assertion! We can even click into the profile, click on "Assertions", and see both there. So not only do we have a lot of control over which pages we want to test - even including filling out forms - but we can also do custom assertions on a page-by-page basis in addition to having global tests. I love that.


Sunday 2020-03-01 12:14:34 by petrero

27.3.Deploying to SymfonyCloud{Initializing your SymfonyCloud Project; Deploying & Security Checks}

Next, to tell SymfonyCloud that we want a new "server" on their system, run:

    symfony project:create

Every "site" in SymfonyCloud is known as a "project" and we only need to run this command once per app. You can ignore the big yellow warning - that's because I have a few other SymfonyCloud projects attached to my account. Let's call the project "Sasquatch Sightings" - that's just a name to help us identify it - and choose the "Development" plan. The development plan includes a free 7 day trial... which is awesome. You do need to enter your credit card info - that's a way to prevent spammers from creating free trials - but it won't be charged unless you run symfony project:billing:accept later to keep this project permanently. I already have a credit card on file, so I'll use that one. Once we confirm, this provisions our project in the background... I assume it's waking up thousands of friendly robots who are carefully creating our new space in... the "cloud". Hey! There's one now... dancing! And... done!

Deploying & Security Checks Ready for our first deploy? Just type:

    symfony app:prepare:deploy --branch=master --confirm --this-is-not-a-real-command

Kidding! Just run:

    symfony deploy

And... hello error! This is actually great. Really! The deploy command automatically checks your composer.lock file to see if you're using any dependencies with known security vulnerabilities. Some of my Symfony packages do have vulnerabilities... and if this were a real app, I would upgrade those to fix that problem. But... because this is a tutorial... I'm going to ignore this.

Our First Deploy

Run the command again with a --bypass-checks flag:

    symfony deploy --bypass-checks

We still see the big message... but it's deploying! This takes care of many things automatically, like running composer install and executing database migrations. This first deploy will be slow - especially to download all the Composer dependencies. I'll fast-forward. It also handles setting up Webpack Encore... and even creates a shiny new SSL certificate. Those are busy robots! And... done! It dumped out a funny-looking URL. Copy that. In a real project, you will attach your real domain to SymfonyCloud. But this "fake" domain will work beautifully for us. Spin back over and pop that URL into your browser to see... a beautiful 500 error! Wah, wah. Actually, we're super close to this all working. Next, let's use a special command to debug this error, add a database to SymfonyCloud - yep, that's the piece we're missing - and load some dummy data over a "tunnel". Lots of good nerdiness!


Sunday 2020-03-01 12:14:34 by petrero

33.1.Per-Page Time Metrics & Custom Metrics{Cautiously Adding Time-Based Assertions; Custom Metrics; Checking the Time-Based Metric}

We know that the scenario will be executed against our production server only. If we profiled a local page, this stuff has no effect. That means that the results of these profiles should have less variability. Not no variability: if your production server is under heavy traffic, the profiles might be slower than normal. But, it will have less variability than trying to compare a profile that you created on your local machine with a profile created on production: those are totally different machines and setups.

Tip

I also recommend adding samples 10 to each scenario. This will then use 10 samples (like normal Blackfire profiles) and further reduce variability:

visit url("/")
    name "Homepage"
    samples 10
    ...

Cautiously Adding Time-Based Assertions This means that you can... maybe add some time-based assertions... as long as you're conservative. For example, on the homepage, let's assert that main.wall_time < 100ms. By the way, most metrics start with metrics. and you can look on the timeline to see what's available. A few metrics - like wall time and peak memory - start with main.. Anyways, as you can see inside Blackfire, our homepage on production normally has a wall time of about 50ms... so 100ms is fairly conservative. But time-based metrics are still fragile. Doing this will likely result in some random failures from time-to-time.

Let's commit this:

git status git add . git commit -m "adding homepage time assertions" And deploy:

symfony deploy --bypass-checks

Custom Metrics While that's deploying, I want to show you a super powerful feature that we won't have time to experiment with: custom metrics. Google for "Blackfire metrics". In addition to the timeline, this page also lists all of the metrics that are available. But you can also create your own metrics inside .blackfire.yaml. In addition to tests and scenarios, we can have a metrics key. For example, this creates a custom metric called "Markdown to HTML". The real magic is the matching_calls config: any time the toHtml method of this made-up Markdown class is called, its data will be grouped into the markdown_to_html metric. That's powerful because you can immediately use that metric in your tests. For example, you could assert that this metric is called exactly zero times - as a way to make sure that some caching system is avoiding the need for this to ever happen on production. Or, you could check the memory usage... or other dimension. You can use some pretty serious logic to create these metrics: making it match only a specific caller for a function, OR logic, regex matching and ways to match methods, calls from classes that implement an interface and many other things. You can even create separate metrics for the same method based on which arguments are passed to them. They went a little nuts.

Checking the Time-Based Metric Anyways, let's check on the deploy. Done! Go back - I'll close this tab - and let's create a new build. Call it "With homepage wall time assert". Start build! And... it passes! This time we can see an extra constraint on the homepage: wall time needs to be less than 100ms. If it's greater than 100ms and you have notifications configured, you'll know immediately. Next: now that we have this idea of builds being created every 6 hours, we can do some cool stuff, like comparing a build to the build that happened before it. Heck we can even write assertions about this! Want a build to fail if a page is 30% slower than the build before it? We can do that.


Sunday 2020-03-01 12:40:32 by petrero

34.1.Testing a Build Compared to the Last Build{Adding a Comparison Test with percent(); Comparison Tests: Not for Manual Builds; Automatic Build on Deploy; Seeing the Compared Builds}

A long time ago in this tutorial, we talked about Blackfire's truly awesome "comparison" feature. If you profile a page, make a change, then profile it again, you can compare those two profiles to see exactly how that change impacted performance. When you use the build system, you can do the exact same thing... and you can even write "tests" that compare a build to the previous build. For example, you could say:

Yo! If the wall time on the homepage is suddenly 30% slower than the previous build, I want this build to fail.

Adding a Comparison Test with percent()

How can we do that? It's dead simple. Add a new global test - how about "Pages are not suddenly much slower" - and set this to run on every page: path: /.*. For the assertion, we can use a special function called percent: percent(main.wall_time) < 30%. That's it! There's also a function called diff(). If you said diff(metrics.sql.queries.count) < 2, it means that the difference between the number of SQL queries on the new profile and the old profile should be less than 2.

Let's see what this looks like! Find your terminal and commit these changes:

git status git add . git commit -m "adding global wall time diff assert" Now... deploy!

symfony deploy --bypass-checks

Comparison Tests: Not for Manual Builds But... bad news. If we waited for that to finish deploying... and then triggered a new custom build... that test would not run. In fact, I want you to see that. Wait for the deploy to finish - okay, good - then move back over and start a build. This does what we expect: it executes our scenario and creates 2 profiles. Look at the 3 successful constraints for the homepage: we see the other global test about "HTTP requests should be limited"... but we don't see the new one. What gives? So... when you create a build, you can specify a "previous" build that it should be compared to by using an internal "build id". Our project is too new to see it, but this happens automatically with "periodic" builds: our comparison assertion will execute on periodic builds.

Tip

Triggering builds via a webhook requires an Enterprise plan.

But when we create a manual build... there's no way to specify a "previous" build... which is why the comparison stuff doesn't work. Fortunately, since I don't want to wait 12 hours to see if this is working, there is another way to trigger a build: through a webhook. Basically, if you want to create a build from outside the Blackfire UI, you can do that by making a request to a specific URL. And when you do that, you can optionally specify the "previous build" that this new build should be compared to.

Automatic Build on Deploy This webhook-triggered-build is especially useful in one specific situation: creating a build each time you deploy. If you did that correctly, your comparison assertion would compare the latest deploy to the previous deploy... which is pretty awesome. Because we're using SymfonyCloud, this is dead-simple to set up. Find the Blackfire SymfonyCloud documentation and, down here under "Builds", I'll select our environment. Basically, by running this command, we can tell SymfonyCloud to send a webhook to create a Blackfire build each time we deploy.

Copy it, move over to your terminal and... paste:

    symfony integration:add --type=webhook --url='https://USER:[email protected]/api/v2/builds/env/aaaabbee-abcd-abcd-abcd-c49b32bb8f17/symfonycloud'

Hit enter to report all events and enter again to report all states. For the environments - this is asking which SymfonyCloud environments should trigger builds. Answer with just master - I'll explain why soon. And... done! Let's redeploy our app. Oh, but before we do, refresh our builds page. Ok, we have 5 builds right now. Now run:

    symfony redeploy --bypass-checks

This should be pretty quick. Then... go refresh the page. Yes! A new build - number 6 - triggered by SymfonyCloud. And it passes. Awesome! Let's redeploy again:

    symfony redeploy --bypass-checks

When that finishes... there's build 7! But to see the comparison stuff in action, I need to do a real deploy so that the next build is tied to a new Git sha. I'll do a meaningless change, commit, then deploy:

git commit -m "triggering deploy" --allow-empty symfony deploy --bypass-checks

Seeing the Compared Builds Actually, I could have skipped changing any files and committed with --allow-empty to create an empty commit. When this finishes... no surprise! We have build 8! On this build, it's super cool: each profile has a "Show Comparison" link to open the "comparison" view of that profile compared to the same profile on the build from the last deploy - which - if you click "latest successful build" - is build 7. Back on build 8, click the "Show 4 successful constraints" link. There it is! We can see our "Pages are not suddenly much slower" assertion! It's comparing the wall time of this profile to the one from the last build. Click to open up the profile... and make sure you're on the Assertions tab. I love this: 2 page-specific assertions from the scenario, and 2 global assertions: one using the percent() function. The "Recommendations" also got a bit better: Blackfire automatically has some built-in recommendations using diff: this recommends that the new profile should have less than 2 additional queries compared to the last build. It looks like it failed... but that's just because the other part of this recommendation - not making more than 10 total queries - failed. Next: what about running builds on your staging server so you can catch performance issues before going to production? Or what about executing Blackfire builds on each pull request? We can totally do that - with a second environment.


Sunday 2020-03-01 13:00:47 by ilammy

Action: ObjCThemis

iOS build automation is not much easier than Android, but at least the iOS Simulator on macOS supports x86. Thankfully, we are developing a library, and for tests we do not need code signing. Otherwise we'd be dealing with the longstanding Apple policy of changing the way code signing works every 18 months.

However, most pain and suffering comes from the build systems popular for iOS/macOS development. Note the CocoaPods cache. It shaves off about 4 minutes and 850 MB of crap^W trunk repository that CocoaPods pulls. It still takes about three minutes to download and unpack, but that's better than nothing. Though, we have to do it for every build. Maybe some day we'll invent a shared cache, but until then let's just ride on Microsoft's generosity in providing free macOS runners. (Otherwise we would be paying $2.08 per build.)


Sunday 2020-03-01 13:03:17 by NewsTools

Created Text For URL [www.theguardian.com/lifeandstyle/2020/mar/01/im-in-love-with-my-wifes-best-friend-and-it-is-making-me-ill-mariella-frostrup]


Sunday 2020-03-01 14:44:37 by Infernio

SSS Rework FOMOD GUI

Depends on 190-de-wx-pt1, rewrites most of the GUI to use the wrappers instead. Drops a whole bunch of wx usages, which is nice. RadioButton needs wrapping, see all the ugly hacks at the bottom of gui_fomod. Also, the design that uses dict of wx objects to store group objects has to go, it's fundamentally hacky and very fragile - e.g. imagine if the wx guys decided to add slots to their objects.

Also contains a bunch of fixes and misc improvements, e.g. user-facing strings have been made translatable, some bugs that were carried over from belt have been fixed, and the 'Back' button no longer works on the first page.

Note the glaring TODOs - this is a straight up port of the original GUI, but we currently don't have a way to change fonts, which the original GUI relied on to differentiate its components. I added some HBoxedLayouts as an alternative, which works fine for the main FOMOD dialog and may even be an improvement in terms of visual clarity, but doesn't help at all with the results screen, which is now an unreadable mess.

Infernio: Updated for wrapped WizardDialog, gave fomods their own stored size.

Utumno: fomod_gui: comments to docstrings

Co-authored-by: MrD [email protected]


Sunday 2020-03-01 14:54:05 by warpreality

Merge pull request #1 from contemplator1998/fuck-your-docker

New settings files


Sunday 2020-03-01 15:00:10 by micha

doom2-pwad-eviternity: Imported version 1.0

Eviternity is a megawad comprised of six 5-map episodes (called Chapters) plus two secret maps. This project exclusively uses OTEX, a brand new high quality texture pack by ukiro.

Eviternity's six chapters explore a series of unique and varied themes, each featuring classic gameplay with an interest in making each map hold its own unique identity and personality. The themes are "Medieval", "Techbase", "Icy Castles", "Industrial / Brutalism", "Hell / Gore / Alien" and "Heaven".

This project was created as a birthday gift to Doom, which was celebrating its 25th birthday the day this was first released ("RC1", released on December 10th, 2018). The texture pack used in this project, OTEX, was also released on the same day - so please do not use Eviternity as a base for your wads & mods. While mostly being a "Dragonfly project", with 24 maps being made or heavily worked on by myself, I present to you a mighty lineup of well-known guest mappers who have crafted beautiful and fun levels.


Sunday 2020-03-01 16:01:54 by MrD

Fix for Double binding of refresh and small stuff from wx future merge:

Mopy/bash/basher/__init__.py: the RefreshData call will call BindRefresh inside balt conversation. We were binding refresh twice. I think this was ignored by wx, as seen (?) in this example program:

    import wx

    class Frame(wx.Frame):
        def __init__(self):
            wx.Frame.__init__(self, None)
            self.Bind(wx.EVT_ACTIVATE, self.OnSize)
            self.Bind(wx.EVT_ACTIVATE, self.OnSize)

        def OnSize(self, event):
            print(event)

    app = wx.PySimpleApp()
    frame = Frame().Show()
    app.MainLoop()

The event is printed only once per event, despite the double Bind. Our new framework, however, would call RefreshData twice, leading to all kinds of weirdness. High time for some guards - implemented later.

Infernio: popping in to clarify the above point re: wx double bindings. When you try to bind twice in wx, it simply replaces the first binding with the second one. This is really inconvenient if you want to, say, display a tooltip when textboxes are made too small via resizing, but also want to do some special behavior for one specific text box when it's resized. You could only do that via hacks in wx's event framework, but it works just fine in ours :) Anyway, back to your regularly scheduled programming...
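
For illustration only, here is a minimal sketch of the multi-listener behaviour described above: a tiny fan-out object that binds one wx handler and forwards each event to any number of Python callbacks. The names (EventFanout, subscribe) are invented for this sketch and are not the actual Wrye Bash event API.

    import wx

    class EventFanout(object):
        """Hypothetical helper: one wx binding, many Python listeners."""
        def __init__(self, wx_widget, wx_event):
            self._listeners = []
            # Bind exactly once - wx itself would otherwise just swap the
            # handler, as Infernio describes above
            wx_widget.Bind(wx_event, self._on_event)

        def subscribe(self, callback):
            """Register another listener - duplicates are allowed for now."""
            self._listeners.append(callback)

        def _on_event(self, event):
            for callback in self._listeners:
                callback(event)
            event.Skip()  # let wx continue its normal processing

Guarding against subscribing the same callback twice (or unsubscribing before rebinding) is exactly the kind of guard the 'High time for some guards' remark above alludes to.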

The early booting phase needs revisiting (some attempts were made in "Late binding of RefreshData"), and event handling may be related to the weird losing of focus after the progress dialog on booting - or it may be related only to that dialog now being native. More investigation needed.

Small stuff from wx-begone merge:

Mopy/bash/basher/__init__.py: warning fixup

Mopy/bash/basher/installer_links.py: isSingle renames

Mopy/bash/bosh/omods.py: rename local 'sizes' to single out balt.sizes

Miscellaneous:

Mopy/bash/basher/mod_links.py: warning fixups

Mopy/bash/bosh/converters.py: rename BCF pack - this is from the ongoing records refactoring

Mopy/bash/env.py: rename 'splitter'

Mopy/bash/belt.py: dropped StringIO, not needed

bass: Add string prefixes to bass and fixup by Infernio in scripts/build.py:

Prefixing the AppVersion string also needs this build script edit, otherwise nightlies will always get plain '307' assigned to bass.AppVersion.

Mopy/bash/bosh/__init__.py: Debug print the game INI path SSS

Due to the 'IOError [Errno 22]' that keeps coming up, knowing the path that's failing would be useful.

Mopy/bash/bosh/_mergeability.py: Minor fixup to sync CBash and PBash

Follow-up for 2cdc1ac87934d2ebaa16226ec4ed4b67d9816d82

Under #190

Co-authored-by: Infernio [email protected]


Sunday 2020-03-01 16:37:09 by nycz

Initial version of gui package

Infernio: Had to squash a whole bunch of these together in order to have the resulting commit not break dev. Introduces a decent first design for the GUI package featuring layouts, buttons and some text components.

Move layouts to new gui package with fully wrapped classes

The goal is to replace balt with this, fully encapsulating the wx classes inside.
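
To make "fully wrapped" a bit more tangible, here is a rough sketch of a layout class that owns a wx.BoxSizer and never leaks it to calling code. All names and signatures here are invented for this example; the real layouts module may look quite different.

    import wx

    class HLayout(object):
        """Hypothetical horizontal layout hiding the wx.BoxSizer it owns."""
        def __init__(self):
            self._sizer = wx.BoxSizer(wx.HORIZONTAL)

        def add(self, component, proportion=0, expand=False, border=0):
            flags = wx.EXPAND if expand else 0
            if border:
                flags |= wx.ALL  # a border only applies in the given directions
            self._sizer.Add(component, proportion, flags, border)

        def apply_to(self, parent):
            # SetSizer is what fixes the 'dialogs opening way too small'
            # issue mentioned further down in this commit
            parent.SetSizer(self._sizer)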

Add regular buttons to new gui package

The onButClickEventful argument is not yet implemented, but it's only used in one class (the ColorPicker dialog) so it isn't a massive concern. It's waiting for better event handling before it can be implemented.

Add ToggleButton and CheckBox to gui, plus more

The main widget API now uses properties instead of getters and setters.
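
As a purely illustrative sketch of that properties-based style (invented names, not the actual gui code), a wrapped checkbox could look roughly like this:

    import wx

    class CheckBox(object):
        """Hypothetical wrapper exposing wx getters/setters as properties."""
        def __init__(self, parent, label=u'', checked=False):
            self._native = wx.CheckBox(parent, label=label)
            self._native.SetValue(checked)

        @property
        def is_checked(self):
            return self._native.GetValue()

        @is_checked.setter
        def is_checked(self, value):
            self._native.SetValue(value)

        @property
        def enabled(self):
            return self._native.IsEnabled()

        @enabled.setter
        def enabled(self, value):
            self._native.Enable(value)

Calling code would then simply write box.is_checked = True instead of poking the underlying wx object directly.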

Add text edit fields to gui

Still to do:

  • styles (no border, sunken border)
  • fonts (mostly monospace)
  • event binding (on_lose_focus)
  • a lot of documentation

WIP - Add labels to gui

again, this is not even close to done. barely more than a stash

Cleanup some leftovers from nycz's initial work

Rename abstract classes to fit WB style

We use 'A' instead of 'Abstract'. Also renamed Widget to _AWidget since instantiating that class doesn't seem useful.

Import directly in balt

Leads to slightly cleaner looking code. Also deleted the remnants of the StaticText class.

Create gui.DeselectAllButton

Make TextCtrl wrappers more consistent

There is no reason for anything but TextArea to ever have a 'wrap' parameter. However, the 'modified' property is useful for both TextAreas and TextFields, so let's move it to the abstract class.
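
A minimal sketch of that split, with invented names, might look like the following: a shared base owning the wx.TextCtrl plus the 'modified' property, and only TextArea knowing anything about wrapping.

    import wx

    class _ATextInput(object):
        """Hypothetical shared base for single- and multi-line text widgets."""
        def __init__(self, parent, style=0):
            self._native = wx.TextCtrl(parent, style=style)

        @property
        def modified(self):
            return self._native.IsModified()

        @modified.setter
        def modified(self, value):
            self._native.SetModified(value)

    class TextField(_ATextInput):
        """Single-line input - never needs a wrap parameter."""

    class TextArea(_ATextInput):
        def __init__(self, parent, wrap=True):
            style = wx.TE_MULTILINE | (0 if wrap else wx.TE_DONTWRAP)
            super(TextArea, self).__init__(parent, style=style)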

Remnants of balt.StaticText -> gui.Label transition

Rename abstract classes in layouts.py to fit

Carry over wx3 SendSizeEventToParent edit

Was the only actually needed one from the wx3 TEMP commit

layouts: Use SetSizer by default

Fixes dialogs opening way too small and not remembering their size

Add docstrings and typing

Also removed some obsolete labels in patcher_dialog.py.

Rename 'text' and 'name', both forbidden names

Get rid of all 'text' usages in the new GUI code

Forbidden name, see wiki.

Get rid of all 'name' usages in the new GUI code

Forbidden name, see wiki. I almost definitely missed some here.

Rename some setter parameters

To avoid forbidden names etc.

Reintroduce support for some missing features

  1. Hiding text input borders

Also turns the INI details name back into a read-only text area (works better for small screens).

  2. StaticText.Rewrap (now Label.rewrap)

Cut out some parts of this that I don't think are needed, but we'll have to see.

Drop noAutoResize

When in doubt, leave it out - can't find any breakage from not supporting this, so dropping it.

Remove HideNativeCaret() usage

Doesn't even seem to work, and why are we even doing this in the first place??

Split into modules earlier

gui/__init__.py should become a central import point, which will significantly reduce commit noise and simplify API usage (in exchange for a very painful conflict resolution that I'll now have to slog through...).

Notes for these squashed modules:

Move text-related classes into gui/text_components.py

gui/__init__.py is starting to become pretty large and there's no reason to limit ourselves to one file anyways.

Also randomly noticed that the copyright dates were still from 2015 here.

Move button classes into gui/buttons.py

Since these are going to expand soon (BitmapButton), now felt like a good time to do it.

Some minor improvements to layouts.py

Fix 'Modified' field not being editable

Got the incorrect assignment (False instead of True), but I dropped the entire assignment instead. Don't know why it was there in the first place.

Add string prefixes to gui

scripts/build/installer/macros.nsh: Drop empty 'gui' folder in standalone

FFF this into the first wx-begone merge, whenever we're ready for that ;)

Co-authored-by: Infernio [email protected]


Sunday 2020-03-01 17:15:23 by Infernio

Wrap wx.html2.WebView as WebViewer

Nicely hides the html2 import ugliness and can abstract away some annoyances (e.g. disabling the forward / back buttons, turning file paths into 'file:' URLs, etc.).

Had to do the reload button using a really ugly hack. Also, until we're on wx4, everything but the doc browser will have a single annoying 'about:blank' entry in its history (the doc browser takes long enough to load that it doesn't happen there). On wx4, we can clean those entries out of the history manually.

Note: I switched StartURL for webbrowser.open - didn't want gui to import windows.py, that sounded like a really weird dependency.
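
For illustration, a wrapper along the lines described above might look roughly like the sketch below; the class name, the methods and the history workaround are assumptions based on this commit message, not the actual code.

    import webbrowser
    import wx
    import wx.html2
    try:  # Python 2
        from urllib import pathname2url
    except ImportError:  # Python 3
        from urllib.request import pathname2url

    class WebViewer(object):
        """Hypothetical wrapper hiding the wx.html2 import from callers."""
        def __init__(self, parent):
            self._native = wx.html2.WebView.New(parent)

        def open_file(self, file_path):
            # Turn a plain file system path into a file: URL before loading
            self._native.LoadURL('file:' + pathname2url(file_path))

        def open_external(self, url):
            # External links go to the system browser instead (cf. the
            # StartURL -> webbrowser.open switch mentioned above)
            webbrowser.open(url)

        def update_buttons(self, back_button, forward_button):
            # Disable the back/forward buttons when there is nowhere to go
            back_button.Enable(self._native.CanGoBack())
            forward_button.Enable(self._native.CanGoForward())

        def clear_history(self):
            # On wx4 this is where the stray 'about:blank' entry could be
            # dropped from the history
            self._native.ClearHistory()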

Add string prefixes to gui


Sunday 2020-03-01 17:30:51 by lawrence910426

Fuck your smart pointer

More like a dumb and buggy pointer. I would rather make a smart_pointer of my own.


Sunday 2020-03-01 19:06:30 by Irradiation

Renamed every instance of "plastique" (c4 explosives) to "c4" (#25924)

  • Renamed every instance of "plastique" (c4 explosives) to "c4"

This is in the name of every admin out here and anybody doing testing. Fuck you old c*ders.

  • fuck you plosky and old test map nobody uses

  • PLOOOSKKYYYYY


Sunday 2020-03-01 21:01:03 by Sam Lantinga

Fixed bug 4369 - Going fullscreen with green knob in MacOS freezes app for 15 seconds.

Elmar

Creating a fullscreen window with SDL_CreateWindow(..SDL_WINDOW_FULLSCREEN_DESKTOP..) in MacOS works fine, except if it was triggered by the user with the green knob in the top left window title bar.

Then "something" is different, and SDL_CreateWindow hangs for 15-20 seconds (tested in MacOS 10.13 and 10.14).

Responsible for the hang is this code in SDL_cocoawindow.m - Cocoa_SetWindowFullscreenSpace:

    const int maxattempts = 3;
    int attempt = 0;
    while (++attempt <= maxattempts) {
        /* Wait for the transition to complete, so application changes
         take effect properly (e.g. setting the window size, etc.)
         */
        const int limit = 10000;
        int count = 0;
        while ([data->listener isInFullscreenSpaceTransition]) {
            if ( ++count == limit ) {
                /* Uh oh, transition isn't completing. Should we assert? */
                break;
            }
            SDL_Delay(1);
            SDL_PumpEvents();
        }
        if ([data->listener isInFullscreenSpace] == (state ? YES : NO))
            break;
        /* Try again, the last attempt was interrupted by user gestures */
        if (![data->listener setFullscreenSpace:(state ? YES : NO)])
            break; /* ??? */
    }

One trivial workaround is to change 'const int limit = 10000' to 500. Then the freeze is so short that it doesn't look like a freeze to the user.

Looking further into the problem, I observed that the function Cocoa_SetWindowFullscreenSpace recursively calls itself via some ObjectiveC messages. I managed to extract a callstack for this (copied below): Note how Cocoa_SetWindowFullscreenSpace in stack line 22 calls SDL_PumpEvents, which eventually arrives at SDL_SendWindowEvent, which calls SDL_UpdateFullscreenMode (stack line 0), which then calls Cocoa_SetWindowFullscreenSpace again (not shown). This recursive second call is the one that hangs.

Another "solution" that worked for me was to add a flag to SDL_Window that is set in Cocoa_SetWindowFullscreenSpace and causes this function to return immediately if called from itself.

Obviously, this is also an ugly hack, but I don't have enough time to dive into this crazy Cocoa/ObjectiveC business deep enough to find a proper solution. But hopefully it's easy for one of the experts around.

Note that there is a "failure to go fullscreen"-message involved, maybe using the green knob causes this failure at first.

I can unfortunately not provide a minimum example.

Best regards, Elmar

    0  com.yasara.View           0x00000001007495af SDL_UpdateFullscreenMode + 207
    1  com.yasara.View           0x00000001006e2591 SDL_SendWindowEvent + 401
    2  com.yasara.View           0x0000000100775a72 -[Cocoa_WindowListener windowDidResize:] + 370
    3  com.yasara.View           0x0000000100776550 -[Cocoa_WindowListener windowDidExitFullScreen:] + 512
    4  com.apple.AppKit          0x00007fff3180a2a4 -[_NSWindowEnterFullScreenTransitionController failedToEnterFullScreen] + 692
    5  com.apple.AppKit          0x00007fff31c59737 -[_NSEnterFullScreenTransitionController _doFailedToEnterFullScreen] + 349
    6  com.apple.AppKit          0x00007fff3172aa53 __NSFullScreenDockConnectionSendEnterForSpace_block_invoke + 135
    7  libxpc.dylib              0x00007fff6114b9b1 _xpc_connection_reply_callout + 36
    8  libxpc.dylib              0x00007fff6114b938 _xpc_connection_call_reply_async + 82
    9  libdispatch.dylib         0x00007fff60ec7e39 _dispatch_client_callout3 + 8
    10 libdispatch.dylib         0x00007fff60ede3b0 _dispatch_mach_msg_async_reply_invoke + 322
    11 libdispatch.dylib         0x00007fff60ed2e25 _dispatch_main_queue_callback_4CF + 807
    12 com.apple.CoreFoundation  0x00007fff33d39e8b CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE + 9
    13 com.apple.CoreFoundation  0x00007fff33d3959a __CFRunLoopRun + 2335
    14 com.apple.CoreFoundation  0x00007fff33d38a28 CFRunLoopRunSpecific + 463
    15 com.apple.HIToolbox       0x00007fff32fd1b35 RunCurrentEventLoopInMode + 293
    16 com.apple.HIToolbox       0x00007fff32fd1774 ReceiveNextEventCommon + 371
    17 com.apple.HIToolbox       0x00007fff32fd15e8 _BlockUntilNextEventMatchingListInModeWithFilter + 64
    18 com.apple.AppKit          0x00007fff3128deb7 _DPSNextEvent + 997
    19 com.apple.AppKit          0x00007fff3128cc56 -[NSApplication(NSEvent) _nextEventMatchingEventMask:untilDate:inMode:dequeue:] + 1362
    20 com.yasara.View           0x000000010076fab2 Cocoa_PumpEvents + 290
    21 com.yasara.View           0x00000001006dd1c7 SDL_PumpEvents_REAL + 23
    22 com.yasara.View           0x00000001007795cf Cocoa_SetWindowFullscreenSpace + 223
    23 com.yasara.View           0x000000010074970b SDL_UpdateFullscreenMode + 555
    24 com.yasara.View           0x00000001006e2476 SDL_SendWindowEvent + 118
    25 com.yasara.View           0x0000000100774ff7 -[Cocoa_WindowListener resumeVisibleObservation] + 135
    26 com.yasara.View           0x000000010077664c Cocoa_ShowWindow + 188
    27 com.yasara.View           0x0000000100749492 SDL_FinishWindowCreation + 546
    28 com.yasara.View           0x0000000100748da5 SDL_CreateWindow_REAL + 1573
    29 com.yasara.View           0x000000010010d9b1 vga_setvideomode + 1347
    30 com.yasara.View           0x00000001003f0d46 mod_initscreen + 2614
    31 com.yasara.View           0x00000001003f344b mod_reinitscreen + 460
    32 com.yasara.View           0x00000001003f370d mod_resizescreen + 383
    33 com.yasara.View           0x0000000100418e39 mod_main + 815
    34 com.yasara.View           0x000000010029ca5d main2 + 5766
    35 com.yasara.View           0x000000010011d1b7 main.main_cpuok + 19


Sunday 2020-03-01 21:11:26 by CoolDudde4150

emptied because noone cares about this piece of garbage shit ass dum ass idiot head


Sunday 2020-03-01 21:25:39 by Buddy Burden

I FINALLY figured out how to keep ssh from fucking up my terminal window names!!!

this is a bit of a hack, but it is totally worth it to stop this stupidly annoying problem


Sunday 2020-03-01 22:10:11 by Firehawke

December Apple update per usual (#6078)

  • New working software list additions

apple2_flop_orig: Koronis Rift, Tangled Tales, Nord and Bert Couldn't Make Head or Tail of It (Release 19 / 870722), Drug Alert!, In Search of the Most Amazing Thing [4am, Firehawke]

apple2_flop_clcracked: The Boy Jesus (cleanly cracked) [4am, Firehawke]

Also correct several titles via anoid PM...

  • New working software list additions

apple2_flop_orig: Dinosaur Days Plus!, Now You See It, Now You Don't - Was it there? Was it missing?, Into The Eagle's Nest, Ecology Simulations II, Thrilogy [4am, Firehawke]

apple2_flop_clcracked: In Search of the Most Amazing Thing (First Revision) (cleanly cracked) [4am, Firehawke]

  • Slight adjustment to the description on the Apple II softlists (nw)

  • New working software list additions


apple2_flop_orig: Space Quest: The Sarien Encounter, Portal, Earth Orbit Stations, Adventure [4am, Firehawke]

  • apple2_flop_clcracked: Replace Music Construction Set (cleanly cracked) to fix damaged sector.

  • Remove A2 misc dump of Marble Madness. Second disk is dupe of first. No cracktro or other reason to keep this over the cleanly cracked copy. (nw)

  • New working software list additions


apple2_flop_orig: SwordThrust, The Hunt for Red October, Galactic Attack, Journey (version 16), Southern Command, Wizardry IV: The Return of Werdna, Arthur: The Quest for Excalibur, The Bard's Tale [4am, Firehawke]

apple2_flop_clcracked: MicroChess (Version 2.0) (cleanly cracked). The Spy's Adventures in North America (Version 1987-10-31) (cleanly cracked) [4am, Firehawke]

  • New working software list additions

apple2_flop_orig: Adventure Construction Set, The Ancient Art of War, Borg [4am, Firehawke]

apple2_flop_clcracked: Magic Mailer (Version 1.1) (cleanly cracked), Mind Over Minors (cleanly cracked), Temple of Apshai (cleanly cracked) [4am, Firehawke]


Sunday 2020-03-01 23:00:52 by Sam Lantinga

Fixed bug 4996 - Mac: XBoxOne Bluetooth rumble isn't working

rofferom

I have an annoying issue on MacOS about XBoxOne Bluetooth rumble (Vendor: 0x045e, Product: 0x02fd).

When 360controller is installed, rumble over USB works correctly. However, Bluetooth rumble isn't working at all, with or without 360controller installed (although it does work with Chrome + https://html5gamepad.com).

I looked at the code, and it seems that XBox controllers are managed in MacOS in this file: SDL_hidapi_xbox360.c. The XBoxOne file is disabled for MacOS in SDL_hidjoystick_c.h.

The function HIDAPI_DriverXbox360_Rumble() is called correctly, and hid_write() returns no error.

I have tried a stupid test: I took the rumble packet from 360controller: https://github.com/360Controller/360Controller/blob/ec4e88eb2d2535e9b32561c702f42fb22b0a7f99/XBOBTFF/FFDriver.cpp#L620. With the patch I have attached, I managed to get rumble working on Bluetooth (with some stupid vibration level, but it proves it can work if the packet is changed).

But it breaks the USB rumble with 360controller. A comment in the function makes an explicit reference to 360controller; I think that's why I have broken this specific use case.

I don't know what is the correct way to fix this, but it seems that the current implementation has a missing case for Bluetooth support.

Note that I also tested master this morning, and I have another issue: the

    if (!device->ffservice) {
        return SDL_Unsupported();
    }

test fails in DARWIN_JoystickRumble(). This test was done quickly; I'm not totally confident about its accuracy.


< 2020-03-01 >