All posts by Andreas Heigl

About Andreas Heigl

father, husband, developer, architectural draughtsman, brother, master of forestry sciences, scout, rescue diver

Git is awesome

I just recently fat-fingered a branch-deletion of a remote branch. But luckily git has you covered should you do that. Let me tell you the story…

I don’t know why on a lot of keyboards the letters D and F are right next to each other (well, I do know, but that’s a different story). So far that had never been an issue.

But! If you start typing sloppily and, while hitting the F key, also hit the D, that usually just means you have to use the backspace and delete a character you didn’t want.

Unless you are on the CLI and hit ENTER immediately after …

What happened:

I was working on a branch and committed some stuff to it. As I already had a PR open for it on GitHub, I pushed the change.

Of course the CI found a minor thing in the code: an unnecessary cast. So I removed that cast and committed the change to my local branch as well. As it was a really minor change that I should have made with the previous commit already, I decided to do a git commit --amend --no-edit. Just add it to the previous commit and be done.

That now replaced the last commit with a new one and I had to force-push that to the remote branch.
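In commands, the intended workflow looked roughly like this (with a placeholder branch name):

git commit --amend --no-edit    # fold the fix into the previous commit, keeping its message
git push origin branchname -f   # force-push the rewritten commit to the existing remote branch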

And now I fat fingered.

Instead of git push origin branchname -f I typed git push origin branchname -df

And the -d means: Delete that branch on the remote server.

I mean it’s not that much of a loss. I could have just used git push origin branchname again and be done.

But by deleting the remote branch I had also closed the PR. And just pushing the branch again would have required me to open a new pull request instead of being able to reopen the old one. Why? Because the old PR was associated with the old commit hash, and I now had a new commit hash.

So how could I fix that?

git reflog to the rescue! While still on the local branch I issued this:

git reflog
4fcb8ff4d (HEAD -> branchName, origin/branchName) HEAD@{0}: commit (amend): Commit Message
0cc696b53 HEAD@{1}: commit: Commit Message
458011b1b HEAD@{2}: rebase (continue) (finish): returning to refs/heads/branchName

So the great thing here is that we not only have the log of our last commits (which would show only one entry for the “Commit Message” commit), but we also have both commit hashes. How cool is that!

That allowed me to do git push origin 0cc696b53:branchName, which pushed the commit 0cc696b53 to the server and named the resulting branch branchName. That caused GitHub to realize that the branch still exists and allowed me to reopen the pull request.

So now we are almost in the same situation as before my fat-fingered stupidity.

The only thing left to do now is to actually push (and not delete) the branch to the server.

And a git push origin branchName -f (no -d) later, the branch is updated, the pull request knows about the update and CI is up and running.
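To summarize, here is the whole recovery condensed into a few commands, with the branch name and commit hash taken from the reflog output above:

git reflog                             # shows both the amended and the original commit hash
git push origin 0cc696b53:branchName   # recreate the remote branch with the old commit
# reopen the pull request on GitHub, then:
git push origin branchName -f          # force-push the amended commit as usual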

Thank you, Git, for having my back!

composer 🧡 phar

It has bothered me for a long time that installing tools via composer cluttered my projects with unnecessary dependencies and also bound my code to the dependencies of my development toolchain and vice versa.

The easy way to solve that was to use phar-files for the tools I am using in my development chain. So tools like phpunit, phpstan, psalm or phpcs/phpcbf. All of these can be installed via composer require --dev – but also via phive install.

The trouble with using the PHAR files, though, was that composer didn’t know about them. So whenever I wanted to use a plugin for one of those tools, composer couldn’t see that the tool was already there and installed it again. Which wasn’t helpful!

I thought about multiple ways to handle that, like a composer plugin that removes installed PHAR files from the internal resolver tree, and a few other ideas. None of them really worked out.

Until a few days ago it hit me: composer’s replace config option!

So what did I do:

After installing my tool – in this case php-codesniffer – via phive install phpcs --copy, I created a new composer.json file in the .phive folder with the following content:

{
    "name": "myproject/phive_stuff",
    "description": "A replacement package for phars",
    "minimum-stability": "stable",
    "license": "MIT",
    "replace" : {
        "squizlabs/php_codesniffer": "*"
    }
}

Now I added this code to my project’s composer.json file:

{
    "repositories": [{
        "type": "path",
        "url": ".phive/"
    }]
}

Then all that was left to do was to require the new package via

composer require --dev myproject/phive_stuff 

With all that done I can now install plugins for php-codesniffer via

composer require --dev phpcompatibility/php-compatibility

and composer will realize that php-codesniffer is already installed and not install it again.
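For reference, here is the whole setup condensed into shell commands. The composer config call is just a CLI shortcut that should write the same path-repository entry as the JSON snippet above; adjust the package and tool names to your project:

phive install phpcs --copy                                   # put the PHAR into the tools folder
# create .phive/composer.json with the "replace" entry shown above
composer config repositories.phive path .phive/              # register .phive/ as a path repository
composer require --dev myproject/phive_stuff                 # composer now treats php_codesniffer as installed
composer require --dev phpcompatibility/php-compatibility    # plugins resolve against the replaced package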

Caveat

This has some caveats though! For example, there are two tools, phpcs and phpcbf, that need to be installed separately via phive, while requiring squizlabs/php_codesniffer installs both of them.
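In practice that means one phive call per tool, for example (using the --copy flag explained below):

phive install phpcs --copy
phive install phpcbf --copy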

Due to the way phive works, the binaries are by default symlinked into the project from a central folder outside the project, which can break when using Docker. That’s why I usually call phive with the --copy flag, as that actually adds a copy of the PHAR to the tools folder.

Due to this linking, phpcs suddenly created its config file in that shared folder, which had some unexpected side effects. When using --copy, the config is added to the tools folder by default.

So there might be some extra work necessary when using PHARs. But at least it works now 😁

Further ideas

My main idea now is to automate this manual process, as that is something phive could do automatically when installing (or updating) a tool.

Would that be something that helps others as well? Feel free to leave a comment on the feature request on GitHub.

Tweaking a WordPress blog for the fediverse

The Fediverse is taking off. Slightly. I’m not sure yet whether it’s similar to “Linux on the desktop”, but no matter what, it’s all about federation. And making it easier to get content right into people’s timelines is worth investigating.

So I decided to try the ActivityPub plugin and see where it leads me.

Installation of the plugin is straightforward. Head to "new Plugins", search for ActivityPub, install and activate it.

The cool thing is: That’s it!

At least when you have set up your WordPress blog out of the box.

You can now follow the author on the fediverse by checking for @[authorname]@[blog-URI]. So in my case that would be @heiglandreas@andreas.heigl.org

And of course in my case it didn’t work out-of-the-box. Why should it.

Why? Well, two reasons:

  • One was that I use the Yoast SEO plugin, which by default (or did I actually set it up that way?) redirected requests to the author page back to the main website. Which is kind of counterproductive when you want information about the author. So I changed those settings (“Yoast SEO” => “Search Appearances” => “Archives” – set “Author Archives” to On).
  • The other was that I am running this blog from a subfolder. Which is something so common that the plugin authors already have it in their FAQ on the plugin page. So I headed over to the server config, made the mentioned tweaks, restarted the server and – voilà – everything works!

Now I was able to find and follow @heiglandreas in my Mastodon-Client.
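If you want to check whether a handle resolves, you can query the WebFinger endpoint that Mastodon uses for discovery. A quick sketch, assuming the plugin serves the standard /.well-known/webfinger route (the exact response shape depends on the plugin version):

curl -s 'https://andreas.heigl.org/.well-known/webfinger?resource=acct:heiglandreas@andreas.heigl.org'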

Things that I need to figure out now

The next things on my todo-list are:

  • That’s all nice and dandy on a personal blog. But how do I implement this so that people can actually follow a blog with changing authors – like for example 24daysindecember (did I mention that we are looking for people that want to contribute?)
  • Is there a possibility (or does it at all make sense) to somehow integrate that into my default fediverse-account? Or to get my personal account over to the andreas.heigl.org domain? Or to setup an @andreas@heigl.org fediverse account that also contains the stuff from the blog…

But those are questions that I will possibly answer in a later post.

Increase code coverage successively

I often come across legacy projects that have very low code coverage (or none at all). Getting such a project up to a high code coverage can be very frustrating, as you will be looking at poor coverage numbers for a very long time.

So instead of generating an overall code coverage report with every pull request, I tend to create a so-called patch coverage report that checks how much of the patch is actually covered by tests.

Having something like that in place also allows me to require contributors to include tests for their newly contributed code. Which in turn successively improves the overall code coverage, up to a level where I might be able to switch to overall coverage instead of patch coverage.

But how to implement that?

That’s not as complicated as it sounds, as Sebastian Bergmann has already written a tool for that.

Enter phpcov

Using phpcov requires us to

  • first generate a diff against the last code-revision,
  • then generate a coverage-report via phpunit --coverage-php and
  • then run phpcov against those artefacts.

So it’s as complicated as

$ git diff HEAD^1 > /tmp/patch.txt
$ ./tools/phpunit --coverage-php /tmp/coverage.cov
$ ./tools/phpcov patch-coverage --path-prefix /path/to/project /tmp/coverage.cov /tmp/patch.txt

That’s it.

It will return a non-zero value when not all lines are covered and it will tell you which lines aren’t covered.

So add that to your automation to have it executed at whatever stage you like (I recommend the CI pipeline of your pull/merge request; let it fail whenever the return code is non-zero).
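For a pull request pipeline you will usually want to diff against the target branch instead of just the last commit. A minimal sketch, assuming main is the target branch, the tools live in ./tools/ and the pipeline runs from the project root:

$ git fetch origin main
$ git diff origin/main...HEAD > /tmp/patch.txt
$ ./tools/phpunit --coverage-php /tmp/coverage.cov
$ ./tools/phpcov patch-coverage --path-prefix "$(pwd)" /tmp/coverage.cov /tmp/patch.txt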

Github Action

If you want to see a way to implement that in GitHub Actions, check out this gist.