Multidimensional action options in NET-TOOLS

While working on the NET-TOOLS content detection behaviour, I realized that I wasn't happy with just one action option per site. Let me explain.

The current netToolsApiElementRulesMock allows one specific action when registered content has been discovered. In the example below, each and every block has the action "notify".

     "": {
         "comments": {},
         "posts": {
             "element": ".userContentWrapper",
             "parents": 1,
             "action": "notify",
             "attachto": "userContentWrapper"
         }
     }

The plugin rolls through a bunch of categories and (currently) site names. When triggered, the content only gets flagged. In this case (as long as it works) a post will say something like "hey, this shared URL is based on THIS". So if we trigger on a satire site, we basically say "This is satire. Beware".

I was happy with this behaviour until I realized that different categories have different trigger levels. I'd say "fascism by choice" is not the right term for this side patch. It's all about customization.

For example, if I normally want content to be flagged with a notification, that will also happen to data coming from the category "rightWing". That's not good enough! So from now on, the JSON object will be handled multidimensionally, where the above JSON block contains the default behaviour for a specific site. If I for some reason need to change this behaviour, I can do it either on site level or on category level. Let me show this too, below.

One thing to note is that the same rules that apply to the actions also apply to description/descriptions.

     "rightWing": {
         "description": "Right wing politics.",
         "names": {
             "": "Nyheter Idag",
             "": "Fria Tider",
             "": "Samhallsnytt"
         },
         "action": "replace",
         "actions": {
             "": "remove",
             "nyheteridag": "notify"
         }
     },
     "satire": {
         "description": "Fake news and satire",
         "names": {
             "storkensnyheter": "Storkens Nyheter (obsolete)"
         },
         "descriptions": {
             "storkensnyheter": "Content on this site was considered fake news and made people angry."
         }
     }

The above example has a default action set to replace. While the site itself (facebook) has a setting that tells the plugin to notify the user on normal triggers, this default action is attached to the rightWing category. In this case, if we trigger on storkensnyheter, the plugin will keep notifying me about "fake news". But if friatider is triggered, the detected element will be replaced completely with a notification box saying that the content used to be there but no longer is.

However, there are more special rules under the actions object. I can live with shared content from nyheteridag, so if we happen to trigger on that site, the plugin falls back to a notification. If we for some reason trigger on the samnytt link, that element will not show up at all, not even with a notification.

See below for the plugin in effect!
Note: The screendumps below do not match the configuration above.

Posted in Uncategorized | Leave a comment

New project page for Network Tools

New project documentation has been established for the ongoing project at

The current release has gone through only very basic testing, with Facebook as the test base. However, it is time to move forward. The next step in the codebase is to build a configurable interface, organized in a user friendly setup, so we can move beyond the "one platform only" world.

Basically, this is a completely API-less release, so the first setup will be built on JSON objects, which will be shareable. The first experimental JSON block can look like the one below, and it will be the output of a future API request too. The content will be described in detail on the doc pages (link above).

     "rightWing": {
         "description": "Right wing politics.",
         "names": {
             "": "Nyheter Idag",
             "": "Fria Tider",
             "": "Samhallsnytt"
         }
     },
     "leftWing": {
         "description": "Left wing politics",
         "names": {}
     },
     "regularMedia": {
         "description": "What we define as independent media.",
         "names": {
             "": "Expressen",
             "": "Aftonbladet",
             "": "Dagens Nyheter",
             "": "Sveriges Television"
         },
         "action": "replace"
     }

The resulting effect of the current solution looks like this:


How to keep the Giraffe motivated

The hardest thing currently known to me is keeping up the motivation in a universe where time is never enough. However, the project is actually moving forward. The first outcome of a non-adopted codebase (nope, I did not adopt old code this time) can be seen below.

The words used in this version are censored due to "word trigger sensitivity". That is, they are probably a trigger for some people. Probably some right wingers.

There's no API ready for sharing and saving blocking data yet. But I need to figure out some more things first anyway. One thing is how configurable the extension should be. Since this plugin is planned to be site independent, the above Facebook example is only the first step. Besides, I have some kind of idea about simple JSON imports, so it could be completely API-less too. Or some kind of "I'll post my JSON data here in this forum, feel free to use my filtering rules". That could give a feeling of decentralization. That is, there should be no API that could be shut down or DDoSed by angry users.

Reading elements

DOMSubtreeModified is deprecated, so the extension primarily runs with a MutationObserver. There is, however, a failover setting in the configuration that allows us to use DOMSubtreeModified instead. DOMSubtreeModified was the prior method for making sure elements are always analyzed, even after the window.load segment. There are always AJAX calls that should probably be included in scans, as long as they make visual changes in the browser.
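As a sketch (not the extension's actual code), the primary/failover setup could look like this, where `useLegacyFallback` stands in for the configuration flag mentioned above:

```javascript
// Watch a subtree for added nodes: MutationObserver as the primary
// mechanism, deprecated DOMSubtreeModified as a configurable failover.
function observeContent(rootNode, onNodesAdded, useLegacyFallback = false) {
  if (typeof MutationObserver !== "undefined" && !useLegacyFallback) {
    const observer = new MutationObserver((mutations) => {
      for (const mutation of mutations) {
        if (mutation.addedNodes.length > 0) {
          onNodesAdded(Array.from(mutation.addedNodes));
        }
      }
    });
    observer.observe(rootNode, { childList: true, subtree: true });
    return observer;
  }
  // Deprecated failover path: fires on every subtree change.
  rootNode.addEventListener("DOMSubtreeModified", (event) => {
    onNodesAdded([event.target]);
  });
  return null;
}
```

Returning the observer makes it possible to call `disconnect()` later, something the event-based fallback never offered cleanly.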

Making it happen

Currently, this script loops through a pre-defined wordlist. For each element found on the site, the plugin checks whether there are any sub-elements within the primary scanned elements – which come from either DOMSubtreeModified or a MutationObserver – that contain URL elements. URL elements are, when found, scanned for the badwords listed in the sample variable.
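A simplified sketch of that scanning step, with an illustrative wordlist in place of the real `sample` variable:

```javascript
// Illustrative stand-in for the extension's sample wordlist.
const badwords = ["friatider", "nyheteridag", "samnytt"];

// Given href strings pulled from scanned elements, return those that
// match any badword (case-insensitive substring match).
function findFlaggedUrls(hrefs, words) {
  return hrefs.filter((href) =>
    words.some((word) => href.toLowerCase().includes(word))
  );
}

findFlaggedUrls(
  ["https://friatider.example/post/1", "https://example.com/ok"],
  badwords
); // → ["https://friatider.example/post/1"]
```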

What’s next?

The next step in this script would probably be to make the scanning level configurable too. For example, the current version depends on there being – after a found URL – a parent element with the class userContentWrapper assigned. When we trigger on this, we choose to replace the element with a text instead of removing it. This part should, however, be configurable by users, probably with something like this:

  • Keep scanning elements on every site this plugin is active on.
  • Let the user configure which element to look for, whether it is a .class or a #id.
  • When the .class or #id is found X levels back, decide what to do (replace or remove the child) and at what level it should happen.
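The last bullet could be sketched roughly like this; `handleMatch` and its options are hypothetical names for illustrating the "X levels back, then replace or remove" idea:

```javascript
// From a matched node, walk up a configurable number of parent levels,
// then either remove that ancestor or replace its content with a notice.
function handleMatch(node, { levels = 1, mode = "replace", text = "Removed" }) {
  let target = node;
  for (let i = 0; i < levels && target.parentElement; i++) {
    target = target.parentElement; // step "back" one level per iteration
  }
  if (mode === "remove") {
    target.remove();               // drop the whole card, no trace left
  } else {
    target.textContent = text;     // keep the card, swap in a notice
  }
  return target;
}
```

With `levels` matching the `"parents"` value from the JSON rules, the same function covers both the notify-and-replace and the remove-completely behaviours.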

The current examples and snippets

Each element on Facebook is considered a kind of "card" element. That is, the card is the user post container. Removing the whole card will also remove everything linked to the post without leaving traces of borders, etc. From there, it can also be replaced with text or information.

Using userContentWrapper (Facebook), this is doable. The discovered "card node" should jump back to its parent and work from there (this is currently handled with jQuery). Below is an example of such cards. Facebook initialization always starts with those, emptied.

We should, however, not stop there. I need to check whether it's possible to actually remove only the LINK element, so that the post data stays while the traces of the link are removed. Also, posts are currently removed even when it is a comment that contains the "bad links". This has to be limited. That is, however, a completely different chapter and should be configured at a user defined level. Why? To make users responsible for their own actions, probably.


NetTools (Giraffe) Embryo in progress

What the current, not-yet-committed codebase contains.

Embryo issue number NT-99:

This is not yet another adblocker. It's yet another "getting-rid-of-bullshit-for-real" extension.

This is a project basically written for personal use. Not your personal use. My personal use. However, as I need this extension to work on "all instances" (or actually: if I switch computer, I want it to follow me wherever I go), it HAS to be shared publicly. Unfortunately, mobile phones are not included in "all instances". At least Chrome is, in that kind of environment, isolated from everything called "extensions".

Example: if many people warn me about a website, link, game, shitstorm, or whatever comes to mind – which also MAY BE satire, fake news or bullshit – I want to be able to flag that data or link as inappropriate (or 'this is racist bullshit'). As an alternative, my browser should be able to completely remove it (the elements) so I don't have to see it anymore.

Since the presence of those "bullshit elements" has been escalating over the past years, since 2019, I decided to build this plugin, mainly for Chrome, and push it to the Chrome Web Store instead of keeping it private. The major reason for this is the "Chrome switching": making it a private extension means you have to download it into each browser that should use it.

So, what is the status of this project?

This evening, the interface was rewritten to handle configurable elements via the tiny icon at the top, near the browser address bar. The reason? There is a clickable overlay on top of every page that sometimes MAY be annoying to always see. So to make that tiny layer disappear while it keeps working in the background, there's now a checkbox available to hide it.

There is also (and I will probably be burnt for this) example data based on three known fascist sites. Which ones? Well, they can be seen if you know how to check git commits.

Chrome storage sync is ready for syncing data. However, it's untested, since there are still only local sources available.


Basic ideas of the APIv4 and the Giraffe Project

First of all: this post is posted automatically. It is not part of the hashtag #avskedsbrev; however, I have to honour the hashtag by using it myself. Besides, I may not be able to return with more information about the projects. Second: the posts I'm making about APIv4 are actually not part of the Giraffe Project.

You should consider the API an engine for whatever you want to build (and I haven't found anything that offers a complete API solution to start building against). I presume that the codebase I'm starting with isn't what other developers expect from an API. I guess most of them would suggest something like Laravel or a similarly complete framework. However, I do not intend to build something big either. The first working API (v3.0) uses WordPress as a base and mostly tries to use user data approved by WordPress. Since I'm still lazy, writing this, I hope to avoid doing all the work myself. I borrow from WordPress.

Is it a good idea? Probably not, since I had to build TorneAUTH.

But how the API looks, what it is, etc., is actually not what this post was about at all. This is only a simple disclaimer that the Giraffe Project is an entirely different project. One that may borrow the API for data transfer.


APIv4 Opens

Tornevall Networks has been saying there's not enough time to finish off some of the bigger projects, so far only stored in my mind, fast enough. However, due to the escalating situation, I've been trying harder to find the time necessary for building, even if it sometimes feels nearly impossible. Sometimes it's very much the current mood that blocks the way forward.

But my ideas still live. I look very much to my own need for a cleaner webspace to live in. As I explore the internet, I realize that sometimes I need some kind of rest from bigotry, racism and hatred. I can get this by building something that makes it possible to choose the content I want to see – even if the platform I'm visiting doesn't always allow me to do this. And for those concerned: I think it is for a good cause. Built correctly, I think there's a slight chance of surviving the madness.

I've been planning this ever since I visited the fashion-ish blogger "Kissie", at a time when she was still young and her primary goal was to manipulate posts and comments – even if the target today is something much bigger. Back then, the targets were quite tiny.

I've seen similar products developed for Chrome, where politics can easily be filtered away – but mostly on Facebook. I've seen Trump filters, and so on, in a long row of filtering software. But so far, no one seems to have thought bigger than that. There are also other platforms that really should take better care of their content. This is probably prevented by the fact that it would eventually need plenty of capacity.

But by reading about Facebook as a platform through Roger McNamee's eyes – where democracy is at risk of being undermined – I also realize that we can, and probably should, do a lot more. By building something – even if it is for myself – that could be made publicly open, there is always a risk of abuse of the product. Democracy is always at high stake when it comes to data manipulation and fake news. This is what I'm taking with me into this project, even though I'm aware that this product is primarily built for my own relaxation.

But the most important thing: as I believe this might go totally wrong, I'm thinking of opening up the source of the API base I imagine this could be built on. One goal is to decentralize as much as possible; if this idea fails due to whatever comes my way, the project itself should be harder to stop if people can build their own solutions on top of the API. And maybe even better: by making this public, there could be other ways of making it better.

The base is on version 4.0, as the prior versions probably lack much of what is needed today, and could probably be improved. After all, I'm a very old fashioned, backward-compatibility-thinking developer. Seeing PHP rush forward with deprecations and such, I think it's time to rethink a lot here. The API base is in an embryo state, and by itself it does very little. The major idea is to have Chrome, Firefox and maybe many other plugins communicate with it. The APIv4 base will hopefully be the primary engine of the filter itself.

The project tracker can be found below. This is what I hope will be the start of The Giraffe Project.


Source base: (checkout version 4).



The giraffe project

During the last part of 2019, many people on the interwebz suffered from right wing ideas, like shutting down the very free speech those groups also demand from public society. This happened on pages including social media like /the-forbidden-F-word-of-community/ and Twitter, where the corruption was discovered through Roger McNamee (amongst others).

Holocaust giraffe

Everything started with a giraffe. A giraffe illustrating the ideas of fascism and how its followers demount democracy, step by step. At the end, by the giraffe's head, there was a last "hello" (in German) – before the holocaust itself can be initiated. This image was shared in the community (from the prior post), was first marked as forbidden, and my account was shut down for three days. I then received an apology from Facebook telling me that the rules had not been broken. However, my ban remained. Besides this, I had a longer ban in another section of /the-forbidden-F-word-of-community/ which told me that "I've been warned and shut down before" and that my ban was therefore extended there (which meant I could not live stream until November 2019) – despite the fact that, the last time, the rules had not been broken. All of this together, including a bunch of jackass nazis, made me take an important decision (despite my laziness): to initiate a warfare dashboard.

– November 11, 2019 (Revision 2)


I’ve had enough of this shit!

… or when Facebook bans you with an apology.

Yes. I've really had enough of it. It all starts with the image below. Facebook filters are apparently based on image analytics, and probably also OCR reading (as the giraffe is actually heiling, this image probably triggered some kind of red alert in the Facebook system). However, both the original poster AND I asked for a review of it, as it seemed to have broken some rule. Just a few hours later, I got a message from Facebook saying they were sorry about what had happened. The image was allowed. But from that point, nothing went as I presumed it should…

In connection to the image below, a ban followed. For three days. Despite the fact that Facebook sent me an apology, they never lifted the ban. This generated some kind of very fucked up catch-22 moment. I was banned without doing anything wrong. Of course, my reaction was anything but happiness.

The biggest problem I see here is that Facebook's platform is so big that they have quite a lot of power, but at the same time you cannot claim any right to use it. That is, somewhere here – in the middle of all this crap – I can also see what Roger McNamee once said about their power and the fact that they are severely undermining democracy with their actions.

So from this point, things will happen, based only on how I handle my anger management…


NetTools – What if – Performance issues

It is a little bit too late for questions like this, but this morning I woke up with one: "What about performance?"

"Imagine the effect a larger audience will have on a plugin like this. Each time you, and hundreds of other people, browse into websites, the extension has to start analyzing the website."

Well. This is not a real problem as long as you store data "locally" (or via storage sync, to have the data shared between browsers). In that specific case, the storage immediately gives the user stored data for a given site. If we store data based on the currently visited domain, both time and network performance will be saved. It is when we try to synchronize visits against APIs that the real problems begin. Especially if each element contains different kinds of URLs that have to be sent away for analysis. In cases like Facebook, a lot of data may be transferred out, as the extension won't initially know what the page contains.
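A cache-first lookup along those lines might be sketched like this. The storage backend is injected, so the same logic could sit on top of `chrome.storage` or a plain map; `getSiteRules` and `fetchRemote` are placeholder names for the future API call, not the extension's actual code:

```javascript
// Check storage for rules about the current domain before asking any
// remote API; cache the remote answer so the next visit is local-only.
async function getSiteRules(domain, storage, fetchRemote) {
  const cached = await storage.get(domain);
  if (cached !== undefined) {
    return cached;                 // no network round trip needed
  }
  const fresh = await fetchRemote(domain);
  await storage.set(domain, fresh); // remember for subsequent visits
  return fresh;
}
```

The second visit to the same domain then costs one storage read instead of an API round trip, which is exactly the time and network saving described above.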

If we'd like to blacklist a linked URL, we're not limited to Facebook only. Blacklisted URLs must be reachable from each site that goes through analysis. It probably won't be pretty if the extension grows larger. One solution is to send only the domain name out (hashed?), but with a large amount of traffic this could still be an issue.

The idea itself could look like this:

Fetch all elements when the document has loaded.

All hostnames will be hashed – so a link looks like this …


… where only the domain is the hashed part. Rendering all hrefs with domains only will reduce the amount of data being sent. If many of the links on a website point to the same place – or, even better, if all of them do – there will be only one hash to send initially.
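Since the post doesn't say which hash is used, here is a sketch with FNV-1a purely as a stand-in (a real implementation might prefer something like SHA-256): extract the hostname from each href, hash it, and deduplicate so each domain is sent only once:

```javascript
// 32-bit FNV-1a hash; a placeholder, NOT the hash the project uses.
function fnv1a(str) {
  let hash = 0x811c9dc5;
  for (let i = 0; i < str.length; i++) {
    hash = Math.imul(hash ^ str.charCodeAt(i), 0x01000193) >>> 0;
  }
  return hash.toString(16);
}

// Reduce a page full of hrefs to one hash per unique hostname.
function hashHostnames(hrefs) {
  const seen = new Set();
  for (const href of hrefs) {
    try {
      seen.add(fnv1a(new URL(href).hostname));
    } catch (e) {
      // Ignore relative or malformed hrefs; only absolute URLs carry a host.
    }
  }
  return Array.from(seen);
}
```

If all links on a page point to the same host, the set collapses to a single hash, which is the "only one hash to send initially" case from above.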

Waiting for the document to load comes with a price, though: Facebook loads its pages dynamically, so when scrolling downwards, there will never be a finished document.

Fetch elements with DOMSubtreeModified.

The first versions of the Chrome extension handled this very well. Since the data was stored locally, elements were fetched and analyzed instantly. There were only a few short words (nyatider, friatider, etc.) to look for. But sending data out online in this state will also be instant; the requests won't render in bulk, so the data stream will take longer. With lots of usage this might of course be a problem. Hashing hosts is a good way to do this, but we can't avoid the data stream itself. The bulk will be smaller, but the stream will still be there.
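One possible way (my suggestion, not something from the codebase) to soften that constant trickle is to batch: collect hashes as mutations arrive and flush them in one request per interval instead of one request per element:

```javascript
// Collect hashes from mutation callbacks and send them in bulk.
// `send` is a placeholder for whatever ships data to the API.
function createBatchQueue(send, intervalMs = 2000) {
  let pending = new Set();            // Set deduplicates repeated hashes
  const flush = () => {
    if (pending.size > 0) {
      send(Array.from(pending));      // one bulk request per interval
      pending = new Set();
    }
  };
  const timer = setInterval(flush, intervalMs);
  return {
    add: (hash) => pending.add(hash),
    flush,                            // exposed for manual/unload flushing
    stop: () => clearInterval(timer)
  };
}
```

The bulk is still not free, as noted above, but a hundred mutations within one interval become a single request instead of a hundred.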

The next problem to handle here is location. Imagine usage from Japan, when/if all analyzing servers are located in a different part of the world. There will be delays. And downtime is not even considered here yet…


Network Tools Embryo v3 in the shape of 2.1.0

A few hours ago I booted up something that is supposed to be the new version of the extremely old project "Content Filter for Facebook". However, the name has been changed at least twice (the name I just mentioned was the first one, which almost got banned due to a trademark issue; I wasn't allowed to use Facebook in the name). When it got renamed after that first trademark issue, I gave it the name "NETFILTER", since my plan was to make – specifically – an extension for Chrome that filtered content. The platform was Facebook.

Just recently, I realized that targeting only one platform was thinking inside the box rather than outside it, so I changed my mind. I started a project (in JIRA) with the plan to make it bigger. The target was already more than just "stupid tests" in those days. And a few hours ago, one of the more important reasons for this project became fascism and fake news.

Lazy as I am, I kept this in my mind for a long time. "I have no time right now". And I actually don't. And most of the time I can't motivate myself to go further. Today, I tried to fool myself. Many of my projects are initiated with the question "Is it even possible to do this?". Saying this, and convincing myself that the project is "just an experiment", is often what initiates something bigger. And today, it happened. Actually due to the discovery of some quite scary fascist videos on YouTube, I created the first embryo of "Network Tools 3.0". The current version is an alpha release, so I borrowed the old "Content filter" source and named it 2.1.0.

So, how does it look? Well. It's a quite simple base, since I want to start easy on this. The reason: there's not much prepared yet. There are no scraping tools live, and netcurl is still in development. So the easiest way to boot this project is to keep most of the variables local. Adding, for example, blacklisted sites to the extension will be done via Google APIs. So there's no sharing going on yet. This is also done for "other security reasons".

As you can see above, the embryo is a tiny overlay box in the upper left corner. It's currently clickable. When clicked, a larger box opens. Actually, it's the same overlay box, expanded to full screen. In this mode, which will be upgraded shortly, we're supposed to do something. In the example, the extension asks the user if he/she wants to blacklist the page. This was, however, only a test, to see how we should attack the problem later on. The purpose here is to see how far we can go, compared to the prior obsolete releases, which instead opened a context menu when right clicking the mouse. Blacklisting elements that way was OK, but it won't cover the new purpose, as elements and clicking were quite limited. Instead I want to give myself – and eventually others – more options.

If, for example, we don't want to blacklist the host, we might want to flag the site as trusted, untrusted, etc. Maybe we don't want to do this for the entire host. Maybe we just want to target a static URI within the site. The target may also be a specific URL within the page, in a specific element, and so on. This was nearly impossible to make nice with context menus, as the options were too many.

So, this is basically the start. Nothing happens on the click yet, and the overlay should actually be completely invisible, to not interfere with anything on the site. This extension should be completely silent until asked to work on something.

Source code base.
