Originally posted by Patrick Differ
It irritates me that realistic expectations of what these technologies are capable of are somehow being presented as technology-averse. I work with AI in my day job and I am familiar with how it works.
I say ‘No’ for the simple reason that there is way too much money being made from the case remaining unsolved. Even if the actual perpetrator of these crimes has already been named amongst the very many who have been accused, there isn’t going to be a confession, or a case built which will persuade the CPS to proceed with a charge. There will be no trial, and without a trial there will be no case tested before a jury and proven beyond reasonable doubt. So there will always be doubt, and in our legal system the accused is innocent until proven guilty in a Court of Law. So it can never be solved. Technology cannot change that.
Any reasonable doubt at all will always leave enough room for someone to put forward their new theory, like maybe the Tetley Tea Folk or the Mob did it (or perhaps the Tetley Tea Folk in cahoots with the Mob - and the Royal Family), and get a book deal and fawning Daily Mail coverage. Whilst the financial rewards for providing new names and theories remain high, the production line of new suspects built on spurious reasoning will continue.
You seem to place objective data on some kind of pedestal which makes it somehow incontrovertibly reliable, but that’s not how data works at all. Data is not knowledge; data is the raw material from which we attempt to extract or build knowledge, or, failing that, with which we make decisions.
To give a tangible example: when we use machine learning in online payments to assess whether a particular transaction may be fraudulent, we look at characteristics of the transaction and compare them with known attributes of fraudulent transactions - things like time of day, amount, credit card supplier, customer name, how familiar the customer is to this provider, whether the transaction is an outlier compared to usual customer behaviour, and so on. The machine learning returns a score, say a 90% probability that a transaction is fraudulent. And we may, to protect our platform from financial loss, decline all transactions where the estimated probability of fraud is above 90%.
This does not mean, and must not be taken to mean, that the transaction has been proven by technology to be fraudulent. If that 90% probability is accurate, then for every 90 fraudulent transactions such a system prevents, roughly 10 legitimate transactions would be blocked from completing. This also means fraud has not been proved beyond a reasonable doubt, and such evidence should not be used to deprive the customer of their freedom for the crime of fraud. It would be an error in criminal law to conclude that because the machine says it is 90% likely the suspect is a criminal, they must be guilty.
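To make the point concrete, the threshold decision described above can be sketched in a few lines of Python. This is an illustrative assumption, not any real fraud system's API: the function, the threshold and the fixed scores are hypothetical stand-ins for whatever a production model would produce.

```python
# A minimal sketch of score-thresholding, assuming a model that
# outputs a calibrated fraud probability. All names and numbers
# here are hypothetical, for illustration only.

def decide(fraud_probability: float, threshold: float = 0.9) -> str:
    """Decline a transaction when the fraud score meets the threshold."""
    return "decline" if fraud_probability >= threshold else "approve"

# The key point: a 90% score is a probability, not a proof.
# If the model is well calibrated, then out of 100 transactions
# all scored at 0.9, we still expect roughly 10 to be legitimate.
blocked_scores = [0.9] * 100  # 100 transactions sitting at the threshold
expected_false_positives = sum(1 - p for p in blocked_scores)

print(decide(0.95))                        # declined, but not "proven" fraudulent
print(round(expected_false_positives))     # ~10 legitimate customers blocked
```

The business can tolerate that error rate because the cost of a blocked legitimate customer is small; a criminal court cannot, which is exactly why the score alone proves nothing.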
The technology cannot prove the case. Building a proper legal case still requires proper police work: piecing together the evidence and considering the negative case, that no crime was attempted by the suspected person at all. This should inspire proper caution about how such technologies can and should be used in criminal investigations. They are not answer machines; they are supportive technologies, guiding where to spend time and resources to build a case or find a suspect or person of interest.
In the case of using the technology on the Whitechapel Murders, it would be necessary to follow up any output by collecting more information: putting proper boots on the ground in Whitechapel, proving motive, means and opportunity, making an arrest, interviewing the suspect… It is all obviously far too late to do that work.
Without the accompanying boots on the ground, the technology can only ever be advisory. At absolute best, when applied to the Ripper case, it can provide interesting hints at possibilities; at worst, powerful tools for assisting live investigations are wasted on little more than an entertainment, a sensationalist media parlour game.
You seem determined to treat eyewitness testimony as though it were hard data, going so far as to suggest that if it is inaccurate the eyewitness may be subject to perjury charges. The standard for perjury is, thankfully, much higher than that. Study after study has shown that recall and memory can be inaccurate and influenced. I can point you to multiple examples of H Division’s treatment of eyewitnesses in the late 19th and early 20th centuries falling short of 21st-century professional standards; they were at heightened risk of influencing witnesses compared to modern police.
A widely remarked-upon statistic from the Innocence Project says that “Eyewitness misidentification is the single greatest cause of wrongful convictions nationwide [in the USA], playing a role in 72% of convictions overturned through DNA testing”. In the UK, a man was convicted after being identified by four eyewitnesses and was later proved to be innocent. Source: https://www5.open.ac.uk/research-cen...ness-testimony
None of the eyewitnesses in these cases seems to have intended to mislead; they were simply in error, and so were not guilty of perjury.
Betteridge's Law of Headlines often works for forum thread titles. So, I say ‘No’ and I am quite certain that ‘No’ is the correct answer. Count yourself lucky I do not answer ‘LOL, NO!’.