Frederick is someone you don't want to mess with.
A former student at the University of Westminster now running his own business, he has a back-end coder's eye and a radar for the new new thing.
So when I got Skyping with him about SEO and rich keyword linking, Frederick already had some ideas for giving an article more Google juice.
More on that on Thursday, when I'll record a brief interview with him.
He's already built an application that facilitates symmetrical upload and download from your site or outernet.
But back to that question.
How many journalists check their Google ranking via a range of analytics in the morning?
And how soon before we reach more defined ground, where writing for human consumption is still the end goal, but you have to satisfy the Google beast, or its bots, first along the way?
Looking at inbound and outbound links, as well as rich keyword density.
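Keyword density is easy enough to measure for yourself. Here's a minimal sketch in Python; the article text and keyword are invented for illustration, not taken from any real tool Frederick mentioned:

```python
import re

def keyword_density(text, keyword):
    """Rough keyword density: occurrences of the keyword
    as a share of all words in the text."""
    words = re.findall(r"[a-z']+", text.lower())
    hits = sum(1 for w in words if w == keyword.lower())
    return hits / len(words) if words else 0.0

# Made-up example sentence:
article = "SEO matters. Good SEO means writing for readers, not just for SEO."
print(round(keyword_density(article, "seo"), 2))  # 3 hits out of 12 words
```

Real SEO tools weight titles, headings, and link text differently, but the raw ratio is the starting point.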
Data mining journalism
We truly are heading for an automated journalism society.
And the future really will have more and more clever clogs writing articles and avoiding filtering software to reach higher penetration, in the same way spammers rewrite code all the while to beat filters.
This morning, in a bid to manage the bots entering Viewmag, I found myself writing a new robots.txt file.
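For anyone who hasn't written one: robots.txt doesn't monitor anything by itself, it just sits at the site root and tells well-behaved crawlers what they may and may not index. The paths below are made-up examples, not Viewmag's real ones:

```
# robots.txt — served from the site root, e.g. example.com/robots.txt
User-agent: *
Disallow: /admin/
Disallow: /drafts/

User-agent: Googlebot
Allow: /
```

Misbehaving bots ignore it entirely, which is why you still end up in the server logs checking who's actually crawling you.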
Then we delved into the MySQL database via phpMyAdmin.
Frederick touched on a subject already made popular by Adrian Holovaty, but not used anywhere near as much: public data.
The amount of data made available by government bodies and the like is there to be mined by journalists, according to Frederick.
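And in the Holovaty spirit, the barrier to entry is low: most public bodies publish spreadsheets or CSVs, and a few lines of Python will get you counting. The file name and column names here are hypothetical, just to show the shape of the thing:

```python
import csv
from collections import Counter

def spend_by_department(path):
    """Total a hypothetical council spending CSV
    (columns 'department' and 'amount') by department."""
    totals = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["department"]] += float(row["amount"])
    return totals
```

From there, `totals.most_common(5)` gives you the five biggest spenders, which is often the story.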
More on that, and more, on Thursday.