This project can be found at https://netcurl.org
In the beginning there was curl, and curl was the driver of everything. A sloppily written library was thrown together to quickly fetch data from websites, typically websites that published proxy lists. The proxies were stored, scanned and tested, and those that still answered and actually forwarded traffic were blacklisted. Based on what curl could do, it also supported socks-proxy scans.
The ecommerce era
A few years later, ecommerce entered the picture. There was no single library that could do “everything”: you either had to configure everything yourself, or pull in several libraries, since the APIs in use had multiple entry points, SOAP being one of them. That is where the idea of implementing this library was born.
However, the code was still quite sloppily written, so it was cleaned up and published as an open source project on Bitbucket (link above). Suggestions not to use this wrapper came from several directions, and I explained why the other wrappers were not a good fit; GuzzleHttp was one of the examples. The problem with “the others” was that they had to be fully configured and manually set up before they could be used. Our need was different: we needed something that required only a few lines of code to get started.
NetCurl was expanded to automatically detect available drivers. Curl was the primary driver and SOAP the secondary. Guzzle gave me the idea of extending the support to pick up Guzzle and WordPress when they were available, since they, unlike NetCurl, also supported streams, the default communications engine for PHP. Detection of this was therefore built in.
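The detection idea described above can be sketched as a simple priority chain: try the preferred driver first, then fall back to whatever else is installed. This is a minimal illustration only, not NetCurl's actual PHP code; the function and driver names are assumptions based on the description in this text.

```python
# Minimal sketch of driver auto-detection with a fixed priority order:
# curl first, SOAP second, then the optional third-party engines.
# Names are illustrative; this is not NetCurl's real API.

def detect_driver(available):
    """Return the first usable driver from a fixed priority list."""
    priority = ["curl", "soap", "guzzle", "wordpress", "streams"]
    for driver in priority:
        if driver in available:
            return driver
    raise RuntimeError("No communication driver available")

# Example: a system where only SOAP and PHP streams are installed
# falls back past curl and picks SOAP.
print(detect_driver({"soap", "streams"}))
```

The point of the priority list is that the caller never has to know which engines exist on the host; the library simply picks the best one it finds.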
As of today, NetCurl has evolved into a quickly configurable library that calls HTTP sites and parses the responses into usable objects or arrays. NetCurl activates whatever it needs to fetch at a high verbosity level. It uses HTTP status codes to make calls throwable and extracts body data when necessary. However, the code was not initially written to be PSR compliant, and the current target for this code base is to make it so. One reason is to make the library conflict less with PSR-compliant sites, since the ecommerce platform it is implemented in requires safer code. There are also plans to build in more default communication engines (as before), so that regardless of which drivers a web service uses, communication is always available and chosen by best practice.
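The behaviour described here, turning HTTP status codes into throwable calls while still exposing the parsed body, can be sketched roughly as follows. This is a hedged illustration of the pattern, not NetCurl's real interface; the class and function names are invented for the example.

```python
# Sketch of "throwable calls" driven by HTTP status codes: any response
# with a code >= 400 raises an exception that still carries the parsed
# body, otherwise the body is returned as a usable object.
import json

class HttpRequestException(Exception):
    def __init__(self, code, body):
        super().__init__(f"HTTP {code}")
        self.code = code
        self.body = body

def parse_response(code, raw_body):
    """Parse a raw JSON response body, raising on HTTP error codes."""
    body = json.loads(raw_body) if raw_body else None
    if code >= 400:
        raise HttpRequestException(code, body)
    return body

# A successful call returns the decoded body directly.
data = parse_response(200, '{"status": "ok"}')
print(data["status"])
```

Keeping the parsed body on the exception means error handlers can still inspect what the server said, instead of only seeing the status code.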
The next, and probably last, step is to start implementing this in a professional API-based service that can, like FnargBlog’s “RSSWatch” did, fetch data automatically, store it and analyze it as part of the hunt for fake news, clickbait, changes in blogs, and so on.