Australian Law Enforcement Doesn't Deserve Access To More Data

Image: Getty Images

While the Federal government continues to pursue its agenda of convincing tech companies to provide access to encrypted communications without building a backdoor - however that's meant to work - it's worth thinking about how law enforcement uses data today and why access to more data may not be the answer.

The two most talked-about domestic terrorism incidents of recent times are the Lindt Cafe siege in Sydney and the recent attack on Bourke Street in Melbourne. In both cases, it seems to me, the police had plenty of information about the potential risks but lacked the cohesion to put the pieces together.

If we look at the more recent case on Bourke Street, the perpetrator, Hassan Khalid Shire Ali, had recently been released on bail, was on terrorist watchlists and had had his passport cancelled because of links to terrorist organisations. He had also reported hallucinating. In other words, there were plenty of signs that the offender was likely to be planning some sort of criminal act.

I can't see that there was a lack of data in this case.

The case of Man Haron Monis was not dissimilar. He had previously made significant threats against Qantas, had sent abusive letters to the families of soldiers killed in Afghanistan, had been on watchlists (and later removed from them) and had a history of violent crime.

Again, there wasn't a lack of data. What the police forces lacked was an integrated model across federal and state police data.

In both cases the police knew they were dealing with someone who had made threats, had a recent and escalating criminal history, was exhibiting unstable behaviour and had past associations with known terrorist organisations.

It's time for the law enforcement community to admit that the current system is broken. Surely, when someone on a Federal watchlist is placed on bail by a state-based police force and the courts, an alarm bell should ring somewhere? And while it might not lead to a conviction, perhaps a visit from the local police letting the person know that they are under surveillance as a result of their recent actions and being on a watchlist might discourage them from committing a crime.
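To make that argument concrete, here is a minimal, purely illustrative sketch of the kind of cross-jurisdiction check being described: flag any bail decision involving someone who also appears on a federal watchlist. Every name, identifier and data structure below is hypothetical - nothing here reflects a real police or court system.

```python
# Hypothetical sketch only: the record types, identifiers and data are invented
# to illustrate cross-checking state bail decisions against a federal watchlist.
# No real system or API is implied.

from dataclasses import dataclass


@dataclass
class BailRecord:
    person_id: str   # identifier assumed to be shared across jurisdictions (hypothetical)
    name: str
    offence: str


@dataclass
class WatchlistEntry:
    person_id: str
    reason: str


def flag_bail_decisions(bail_records, watchlist):
    """Return (bail record, watchlist entry) pairs where the person is on the watchlist."""
    watched = {entry.person_id: entry for entry in watchlist}
    return [(record, watched[record.person_id])
            for record in bail_records
            if record.person_id in watched]


if __name__ == "__main__":
    bail = [BailRecord("VIC-1042", "J. Example", "assault")]
    watch = [WatchlistEntry("VIC-1042", "cancelled passport, links to a proscribed organisation")]
    for record, entry in flag_bail_decisions(bail, watch):
        print(f"ALERT: {record.name}, granted bail for {record.offence}, "
              f"is on a federal watchlist ({entry.reason})")
```

The hard part, of course, isn't the lookup itself; it's the assumption baked into person_id that state and federal systems can reliably recognise the same person - which is exactly the integration this article argues is missing.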

The federal government is making a big deal about wanting access to encrypted data. And tech companies, rightly, are railing against the move. Any move that weakens security for everyone is retrograde - something Blackberry (or, more accurately, RIM, the company's original name) learned the hard way.

Blackberry was already facing massive challenges from the rising tide of iOS and Android devices, as well as a patent lawsuit that cost it over half a billion dollars, when the company's management decided to share its encryption keys - the keys protecting the messaging traffic that flowed through its own servers - with the Canadian government.

At the time, all Blackberry messaging traffic across the world was funnelled through those servers. Suddenly, trust in the company was irrevocably damaged. And a company facing major challenges was dealt a substantial blow.

You can bet that there isn't a tech company on the planet that wants to weaken their systems at the behest of a government.

The government and a weak opposition supported the introduction of metadata retention legislation. And now they are pursuing the weakening of protections that are relied on by businesses and individuals around the world. And, the reality is, bad guys will use encrypted communications systems regardless of the laws made by parliament. The tools to develop such applications are widely available and implementation is getting easier all the time.

And, in any case, there's limited evidence to suggest that law enforcement could effectively use such data to prevent a crime - which should be the goal: protecting people from a crime occurring, not picking up the pieces afterwards.


Comments

    It is illegal to open other people's letterbox mail without a court order. So why do officials seem to think it is okay if the mail is electronic?

    On the other hand, we all gave privacy away when we opened our Google accounts.

    Privacy - It's all Over Red Rover

    Google and Facebook know more about you than the feds do... and they're using that data for targeted advertising, not the apprehension of suspects.

    Every time they try to link the sacrifice of privacy protections to terrorism, it's worth reminding:

    This authoritarian bullshit is NOT about terrorism.

    We were promised that there would be no scope-creep in the use of our mandatorily-retained metadata, and now we have fucking city councils browsing it without a warrant to chase fines. We have cops giving their domestic-abuser mates access to track down their fleeing exes. We have journalists and whistleblowers being hunted for daring to expose corruption and grievous incompetence.

    THAT'S what privacy-violating laws are for - not terrorism. They're for obliterating civil rights.

      Terrorism is a useful tool for governments to frighten citizens into giving up their rights to privacy. In actual fact, terrorism causes so little harm compared to everyday risks like driving, crossing the road, exposure to environmental hazards and poor quality food, which hurt or kill many, many more people.

      Governments want to control, they don't want opposition, and they don't want to tell the truth. Power has always corrupted, and democracies are easier to corrupt when the citizens are being watched and controlled.

    Interesting viewpoint.
    Today, I will be playing the part of the Devil's Advocate.
    " the local police letting the person know that they are under surveillance as a result of their recent actions and being on a watchlist might discourage them from committing a crime."
    Given that a lot of the people in this bracket are deemed mentally unstable anyway, this will just agitate them even further, with counter-productive results.
    Those who aren't unstable and are up to no good, will most likely assume (or will have been trained to expect) they are being monitored anyway, and take precautions.
    The police already make their presence known in a not-so-subtle way if you are a person of interest - just ask anyone convicted of a violent crime who has just been released from prison.
    They will tell you tales of police coincidentally turning up at work, social events, at the pub, or just innocuously sitting outside the house in a squad car.

    "At the time, all Blackberry messaging traffic across the world was funnelled through those servers. Suddenly, trust in the company was irrevocably damaged. And a company facing major challenges was dealt a substantial blow."
    At the time, major drug cartels were organising global trade routes and allegiances with affiliated gangs using Blackberries, due to the centralised server model and enhanced encryption.
    IIRC, Mexican cartels were lining up distribution channels with both the Russians and the Mongrel Mob here in Australia, simply by sending them pre-configured BB handsets.
    Moral aspects aside, increased drug use is known to go hand in hand with increased crime, ranging from low-level burglaries to rival bike gang shoot-outs in public bars. In that scenario, encryption does nothing to help bystanders lying in a pool of their own blood.

    Furthermore, Blackberry was put in a no-win position. The media had already served up juicy stories on how BB encryption was the single reason why cops couldn't catch terrorists.

    If BB didn't comply, it would be continually and publicly harangued as a haven for terrorists, paedophiles and criminals. Shareholders would have revolted, and BB would have died on its own sword for nothing more than principle.
    If it did comply, everyone screams 'sellout' and charges off to the nearest blog to post vitriolic comments.
    Privacy-minded organisations looked elsewhere, or simply added an additional layer of protection and continued using the service. As it was, heads of state (US, Germany, etc.) were still using the handsets until just a couple of years ago.
    The major departure would have been from the criminal organisations, and I'm pretty sure BB were happy to live with that decision.

    "You can be(t) that there isn't a tech company on the planet that wants to weaken their systems at the behest of a government"
    They may not want to, but they will.
    When Vodafone started in Australia, it introduced the weakest form of GSM encryption possible (A5), because the Govt mandated it.
    If VF had to comply, then you can bet a month of pizza that the other telcos have also been given the hard word.
    It's a short argument: 'Want to do business here? Then follow my rules.'

    "And, in any case, there's limited evidence to suggest that law enforcement could effectively use such data to prevent a crime"
    But if they were utilising it effectively, it's pretty safe to assume they are not going to blab about it, because criminals read the papers too.
    As my Grandmother used to say, "Absence of life does not mean the presence of death", and a similar principle applies here.
    Simply because we do not have a consistent pattern to point to does not mean the data is not being used effectively.
    Too many successful operations will make the bad guys change tactics and hardware, and the security services have to start over.

    The point you make about threat assessment is a valid one, but it's also a poisoned chalice.
    How does an analyst determine whether someone is a credible threat or simply deranged?

    With a mental health service that is already stretched to breaking point by garden-variety psychosis and bare-cupboard funding, there is no capacity to spot-check aberrations, particularly those with criminal or lethal intent.
    The police (rightly so) simply offload anyone deemed unstable to the mental health assessment teams, as they themselves lack the expertise, sensitivity and subtlety required.

    Whilst data can be modelled to predict behaviour, that requires a full set of data - one that we, as a society, object to providing, hence our privacy laws.

    We are delighted/disturbed that Google can serve up content that caters to our tastes, based on the unfettered data we provide to it, so evidently we are not too bothered about our information being used for analysis.
    Where we get agitated is when that data is perceived to be used against us.
    That in turn, speaks volumes about our trust relationship with the gatekeepers of this information.
    Evidently, we trust AI more than we trust our fellow human beings, and that may be the starting point for how we have to approach this complex problem.

    Perhaps JC Denton was onto something after all..

      That is a great response - and certainly one that has me thinking further. Thanks for taking the time to consider this so deeply. Your point on trusting AI is not lost on me - it's something I've been looking into with a number of people over recent months.

      AI is definitely the only real solution to this issue; hopefully, before I slip off my perch, it will have matured enough for us to trust it with our online identities - after all, we are already tantalisingly close to letting it do our driving for us. An AI-managed Government is what we should be aiming for imo.

        BTW I see we are still in the ridiculous situation where a couple of people taking umbrage at your comment dumps you directly into comment hell on your next post. How long before Lifehacker goes the same way as Gizmodo because of the backlash against this silly system?
