The long-awaited netcurl 6.1 is soon ready for release

A mailing list for netcurl has been established and is open for subscriptions; information like this will be posted there.

Stable releases for netcurl v6.0 will, from now on, no longer be pushed to the master repository. A new stable/6.0 branch has been created for maintenance releases.

Version 6.1 is quite close to getting a new tag, and there are not many compatibility issues left to take care of. The only bigger part still awaiting 6.1 is the complementary network module.

A few other components rest in netcurl 6.1, and it is currently under consideration whether they should wait for completion or follow the first primary release. I’ll be back on this.

Posted in Uncategorized | Leave a comment

NetCurl is in active development

This old project, once born as a proxy scraping tool, is alive again. Well, in fact it has been alive but idle for several years, as its purpose took a big turn when I started pushing its implementation into an ecommerce platform. The project turned out to be a great combination of communication tools, since it had great failover possibilities. However, times change, and it needs more than this now.

My wish is to reinstate a proxy scraper. This project was written in the early years of PHP 5.3 – and that tells a lot about what it was and what it can now become instead. As you can see on the left side, the support for failovers is growing. This of course takes time to implement, but as I just wrote, I believe this must be done.

But that is not everything. By making the client more compliant with reality, it could also become part of other projects – like the network tools, since those tools won’t be worth much if no data scraping is available. For example, there was the “fnarg project” (today better known, to a smaller group of people, as part of the giraffe project), which specialized in RSS fetching. The fetcher was built so that it not only fetched new articles; it also kept track of old ones, and whether they were changed or edited over time.

All of this has forced me into a state I’ve refused to be in for several years now. But realizing that PHP moves forward, and not much backwards, this must be done before it’s too late.

Welcome to version 6.1


Multidimensional action options in NET-TOOLS

While working on the NET-TOOLS content detection behaviour, I realized that I wasn’t happy with just one option per site. Let me explain.

The current netToolsApiElementRulesMock allows one specific action when registered content has been discovered. In the example below, each and every block has an action called “notify“.

     "": {
         "comments": {},
         "posts": {
             "element": ".userContentWrapper",
             "parents": 1,
             "action": "notify",
             "attachto": "userContentWrapper"
         }
     }
The plugin rolls through a bunch of categories and (currently) site names. When triggered, the content will only be flagged. In this case (as long as it works), a post will say something like “hey, this shared URL is based on THIS”. So if we trigger on a satire site, we basically say “This is satire. Beware”.

I was happy with this behaviour until I realized that different categories have different trigger levels. I’d say “fascism by choice” is not the correct term for this side patch. It’s all about customization.

For example, if I normally want content to be flagged with a notification, this will also happen to data that comes from the category “rightWing“. That’s not good enough! So from now on, the JSON object will be handled multidimensionally, where the JSON block above contains the default behaviour for a specific site. If I for some reason need to change this behaviour, I can do it at either site level or category level. Let me show this too, below.

One thing to note is that the same rules for actions also apply to description/descriptions.

     "rightWing": {
         "description": "Right wing politics.",
         "names": {
             "": "Nyheter Idag",
             "": "Fria Tider",
             "": "Samhallsnytt"
         },
         "action": "replace",
         "actions": {
             "": "remove",
             "nyheteridag": "notify"
         }
     },
     "satire": {
         "description": "Fake news and satire",
         "names": {
             "storkensnyheter": "Storkens Nyheter (obsolete)"
         },
         "descriptions": {
             "storkensnyheter": "Content on this site was considered fake news and made people angry."
         }
     }

The above example has a default action set to replace. While the site itself (Facebook) has a setting that tells the plugin to notify the user on normal triggers, this default action is attached to the rightWing category. In this case, if we trigger on storkensnyheter, the plugin will keep notifying me about “fake news”. But if friatider is triggered, the detected element will be replaced completely with a notification box saying that the content was there before, but no longer is.

However, there are more special rules under the actions object: I can live with shared content from nyheteridag, so if we happen to trigger on that site, the plugin falls back to a notification. If we for some reason trigger on the samnytt link, that element will not show up at all, not even with a notification.
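The resolution order described above could be sketched as a small lookup. This is a hypothetical helper, not the plugin’s actual code; the function name and the site keys (samnytt, friatider) are illustrative stand-ins for the keys in the real JSON config:

```javascript
// Hypothetical helper illustrating the described resolution order:
// a per-site override under `actions` wins, then the category-level
// `action`, and finally the site's default action.
function resolveAction(category, siteKey, siteDefault) {
  if (category.actions && siteKey in category.actions) {
    return category.actions[siteKey];
  }
  return category.action || siteDefault;
}

// Illustrative category object, modelled on the rightWing example.
const rightWing = {
  action: 'replace',
  actions: { samnytt: 'remove', nyheteridag: 'notify' }
};
```

With the site default set to notify, friatider resolves to replace (category level), nyheteridag to notify, and samnytt to remove (per-site overrides).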

See below for the plugin in effect!
Note: the screenshots below do not match the configuration above.


New project page for Network Tools

New project documentation has been established for the ongoing project.

The current release has gone through very basic testing, with Facebook as the base. However, it is time to move forward. The next step in the codebase is to build a configurable interface, categorized in a user-friendly setup, so we can move beyond the “one platform only” world.

Basically, this is a completely API-less release, so the first setup will be built on JSON objects, which will be shareable. The first experimental JSON block can look like the one below, and will also be the output of a future API request. The content will be described in detail on the doc pages (link above).

     "rightWing": {
         "description": "Right wing politics.",
         "names": {
             "": "Nyheter Idag",
             "": "Fria Tider",
             "": "Samhallsnytt"
         }
     },
     "leftWing": {
         "description": "Left wing politics",
         "names": {}
     },
     "regularMedia": {
         "description": "What we define as independent media.",
         "names": {
             "": "Expressen",
             "": "Aftonbladet",
             "": "Dagens Nyheter",
             "": "Sveriges Television"
         },
         "action": "replace"
     }
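Since the JSON blocks are meant to be shareable without an API, importing someone else’s rules could be as simple as merging their block into the local one. A minimal sketch, where the function name and the “imported values win” merge policy are my assumptions, not part of the release:

```javascript
// Sketch: merge an imported ruleset into the local one. Categories are
// merged key by key and imported values win, so a shared filter list
// can extend or override a local setup without any API involved.
function mergeRules(local, imported) {
  const merged = { ...local };
  for (const [categoryName, categoryRules] of Object.entries(imported)) {
    merged[categoryName] = { ...(merged[categoryName] || {}), ...categoryRules };
  }
  return merged;
}
```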

The resulting effect of the current solution looks like this:


How to keep the Giraffe motivated

The hardest thing currently known to me is keeping up motivation in a universe where there is never quite enough time. However, the project is actually moving forward. The first outcome of a non-adopted codebase (nope, I did not adopt old code this time) can be seen below.

The words used in this version are censored due to “word trigger sensitivity”. That is, they are probably a trigger for some people. Probably some right wingers.

There’s no API ready for sharing and saving blocking data yet. But I need to figure out some more things first anyway. One thing is how configurable the extension should be. Since this plugin is planned to be site-independent, the Facebook example above is only the first step. Besides, I have some kind of idea about simple JSON imports, so it could be completely API-less too. Or some kind of “I’ll post my JSON data here in this forum, feel free to use my filtering rules”. That could give a feeling of decentralization. That is, there would be no API that could be shut down or DDoSed by angry users.

Reading elements

DOMSubtreeModified is deprecated, so the extension primarily runs with a MutationObserver. There is, however, a failover setting in the configuration that allows us to use DOMSubtreeModified instead. DOMSubtreeModified was the earlier method for making sure elements are always analyzed, even after the window load event. There are always AJAX calls that should probably be included in scans, as long as they make visual changes in the browser.
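That failover could look roughly like this. The function name and the forceLegacy flag are my own sketch of the described configuration setting; only MutationObserver and the DOMSubtreeModified event come from the actual setup:

```javascript
// Sketch of the observation failover: prefer MutationObserver, fall
// back to the deprecated DOMSubtreeModified event when it is missing
// or when the configuration explicitly asks for the legacy method.
function startObserving(rootNode, onChange, forceLegacy = false) {
  if (!forceLegacy && typeof MutationObserver !== 'undefined') {
    const observer = new MutationObserver(() => onChange());
    // childList + subtree also catches ajax-injected content after load.
    observer.observe(rootNode, { childList: true, subtree: true });
    return 'MutationObserver';
  }
  rootNode.addEventListener('DOMSubtreeModified', () => onChange());
  return 'DOMSubtreeModified';
}
```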

Making it happen

Currently, this script loops through a predefined wordlist. For each element found on the site, the plugin checks whether there are any sub-elements within the scanned primaries – which come from either DOMSubtreeModified or a MutationObserver – that contain URL elements. URL elements, if found, are scanned for the bad words listed in the sample variable.
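A rough sketch of that URL-scanning step, where both the word list and the function name are placeholders (the real list lives in the extension’s sample variable):

```javascript
// Placeholder word list; the real one comes from the extension config.
const badWords = ['badsite', 'fakenews'];

// Check a URL against the word list with a case-insensitive substring
// match, mirroring the "scan URL elements for listed bad words" step.
function matchBadWords(url, words) {
  const haystack = url.toLowerCase();
  return words.filter((word) => haystack.includes(word));
}
```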

What’s next?

The next step for this script would probably be to make the scanning level configurable too. For example, the current version depends on there being – after a found URL – a parent element with the class userContentWrapper assigned. When we trigger on this, we choose to replace the element with a text instead of removing it. This part should, however, be configurable by users, probably with something like this:

  • Keep scanning elements on every site this plugin is active on.
  • Let the user configure which element to look for, whether it contains a .class or a #id.
  • When the .class or #id is found X levels back, decide what to do (replace or remove the child) and from which level it should happen.
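Put together, those points could be sketched like this. The node shape and rule fields are assumptions modelled on the userContentWrapper example ("parents", "action", "replace"/"remove"), not the plugin’s actual implementation:

```javascript
// Sketch: walk `rule.parents` levels up from the matched node, then
// either remove the target or replace its content with a notice.
// Nodes only need parentNode, remove() and textContent here.
function applyAction(node, rule) {
  let target = node;
  for (let i = 0; i < rule.parents && target.parentNode; i++) {
    target = target.parentNode;
  }
  if (rule.action === 'remove') {
    target.remove();
  } else {
    target.textContent = rule.replacement || 'Content removed by filter.';
  }
  return target;
}
```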

The current examples and snippets

Each element on Facebook is considered a kind of “card” element. That is, the card is the user post container. Removing the whole card also removes everything linked to the post, without leaving traces such as borders. From there, it can also be replaced with text or information.

Using userContentWrapper (Facebook), this is doable. The discovered “card node” should jump back to its parent and work from there (this is currently handled with jQuery). Below is an example of such cards. Facebook initialization always starts with those, emptied.

We should, however, not stop there. I need to check whether it’s possible to actually remove the LINK element only, so that post data stays while the traces to the link are removed. Also, posts are currently removed even when it is only the comments that contain “bad links”. This has to be limited. That is, however, a completely different chapter and should be configured at a user-defined level. Why? To make users responsible for their own actions, probably.


NetTools (Giraffe) Embryo in progress

What the current, not-yet-committed codebase contains.

Embryo issue number NT-99:

This is not yet another adblocker. It’s yet another “getting-rid-of-bullshit-for-real” extension.

This is a project basically written for personal use. Not your personal use. My personal use. However, as I need this extension to work on “all instances” (or actually, if I switch computers, I want it to follow me wherever I go), this extension HAS to be shared publicly. Unfortunately, mobile phones are not included in “all instances”. At least Chrome in that kind of environment is isolated from everything called “extensions”.

Example: if many people warn me about a website, link, game, shitstorm, or whatever comes to mind – something that MAY BE satire, fake news or bullshit – I want to be able to flag that data or link as inappropriate (or ‘this is racist bullshit’). As an alternative, my browser should be able to completely remove the elements so I don’t have to see them anymore.

Since the presence of those “bullshit elements” has been escalating over the past years, since 2019, I decided to build this plugin, mainly for Chrome, and push it out to the Chrome Web Store instead of keeping it private. The major reason for this is the “Chrome switching”: making it a private extension means you have to download it into each browser that should use it.

So, what is the status of this project?

This evening, the interface was rewritten to handle configurable elements via the tiny icon at the top, near the browser address bar. The reason? There is a clickable overlay on top of every page that sometimes MAY be annoying to always see. So to make that tiny layer disappear while still working in the background, there is now a checkbox available to hide it.

There is also (and I will probably get burnt for this) example data based on three known fascist sites. Which ones? Well, they can be seen if you know how to check git commits.

Chrome storage sync is ready for syncing data. However, it’s untested, since there are still only local sources available.


Basic ideas of the APIv4 and the Giraffe Project

First of all: this post is posted automatically. It is not part of the hashtag #avskedsbrev; however, I have to honour the hashtag by using it myself. Besides, I may not be able to return with more information about the projects. Second: the posts I’m making about APIv4 are actually not part of the Giraffe Project.

You should consider the API as an engine for whatever you want to build (and I haven’t found anything that offers a complete API solution to start building against). I actually presume that the codebase I’m starting with isn’t what other developers expect from an API. I guess most of them would suggest something like Laravel or a similarly complete framework. However, I do not intend to build something big either. The first working API (v3.0) uses WordPress as a base and mostly tries to use user data approved by WordPress. Since I’m still lazy, writing this, I hope not to have to do all the work myself. I borrow from WordPress.

Is it a good idea? Probably not, since I had to build TorneAUTH.

But how the API looks, what it is, and so on, is not what this post was about at all, actually. This is only a simple disclaimer that the Giraffe Project is an entirely different project – one that may borrow the API for data transfer.


APIv4 Opens

Tornevall Networks has been saying there’s not enough time to finish, quickly enough, some bigger projects so far only stored in mind. However, due to the escalating situation, I’ve been trying harder to find the time necessary for building, even if it is sometimes nearly impossible. Sometimes it’s very much the current mood that blocks the way forward.

But my ideas still live. I look very much to my own need for a cleaner web space to live in. As I explore the internet, I realize that sometimes I need some kind of rest from bigotry, racism and hatred. I can get this by building something that makes it possible to choose the content I want to see, even if the platform I visit does not always allow me to do this. And as for the concerns: I think it is for a good cause. Building this correctly, I think there’s a slight chance of surviving the madness.

I’ve been planning this ever since I visited the fashion-ish blogger “Kissie”, back when she was still young and her primary goal was to manipulate posts and comments, even if the target today is something even bigger. At the time mentioned, the targets were quite tiny.

I’ve seen similar products being developed for Chrome, where politics can easily be filtered away – but mostly on Facebook. I’ve seen Trump filters, and so on, in a long row of filtering software. But so far, no one seems to have thought bigger than this. There are also other platforms that really should take better care of their content. Probably this is prevented by the fact that it will, eventually, need plenty of capacity.

But by reading about Facebook as a platform through Roger McNamee’s eyes – where democracy is at risk of being undermined – I also realize that we can, and probably should, do a lot more. By building something, even if it is for myself, that could be made publicly open, there is always a risk of abuse of the product. Democracy is always at high stake when it comes to data manipulation and fake news. This is what I’m taking with me into this project, even though I’m aware that this product is primarily built for my own relaxation.

But most importantly: as I believe this might go totally wrong, I’m thinking of opening up the source of the API base I imagine this could be built on. One goal is to decentralize as much as possible; if this idea fails due to whatever comes in my way, the project itself should be harder to stop, since people can build their own solutions on top of the API. And maybe even better: by making this public, there could be other ways of making it better.

The base is at version 4.0, as the prior versions probably lack much of what is needed today, and could probably be improved. After all, I’m a very old-fashioned, backward-compatible-thinking developer. Seeing PHP rush forward with deprecations and such, I think it’s time to rethink a lot here. The API base is in an embryo state, and by itself it does very little. The major idea is to link Chrome, Firefox and maybe many other plugins to communicate with it. The APIv4 base will hopefully be the primary engine of the filter itself.

The project tracker can be found below. This is what I hope will be the start of The Giraffe Project.


Source base: (checkout version 4).



The giraffe project

During the last part of 2019, many people on the interwebz suffered from right wing ideas, such as attempts to shut down the very free speech those groups demand from public society – alongside the corruption uncovered by Roger McNamee (amongst others) in social media like /the-forbidden-F-word-of-community/ and Twitter.

Holocaust giraffe

Everything started with a giraffe. A giraffe illustrating the ideas of fascism and how its followers dismantle democracy, step by step. At the end, by the giraffe’s head, there was a last “hello” (in German), before the holocaust itself could be initiated. This image was shared in the community (from the prior post), where it was first marked as forbidden, and my account was shut down for three days. An apology reached me from Facebook, telling me that the rules had not been broken. However, my ban remained. Besides this, I had a longer ban in another section of /the-forbidden-F-word-of-community/, telling me that “I’ve been warned and shut down before” and that my ban was therefore extended there (which means I could not live stream until November 2019), despite the fact that, the last time, the rules had not been broken. All of this together, including a bunch of jackass nazis, made me take an important decision (despite my laziness): to initiate a warfare dashboard.

– November 11, 2019 (Revision 2)


I’ve had enough of this shit!

… or when Facebook bans you with an apology.

Yes, I’ve really had enough of it. It all starts with the image below. Facebook filters are apparently based on image analytics, and probably also on OCR reading (as the giraffe is actually heiling, this image probably triggered some kind of red alert in the Facebook system). However, both the original poster AND I asked for a review of it, as it seemed to have broken some rule. Just a few hours later, I got a message from Facebook saying they were sorry about the incident. The image was allowed to be posted. But from that point, nothing went as I presumed it should…

In connection with the image below, a ban followed. For three days. Despite the fact that Facebook sent me an apology, they never lifted the ban. This generated a very messed-up catch-22 moment. I was banned without doing anything wrong. Of course, my reaction was anything but happiness.

The biggest problem I see here is that Facebook’s platform is so big that they have quite a lot of power, but at the same time you cannot claim any right to use it. That is, somewhere here, in the middle of all this crap, I can also see what Roger McNamee once said about their power and the fact that they are severely undermining democracy with their actions.

So from this point, things will happen, based only on how I handle my anger management…
