Melon Farmers Original Version

Censor Watch

2024: May



Commented: China would be proud...

Ofcom decides on overt political censorship of the words of Rishi Sunak being questioned on GB News

Link Here 28th May 2024
Full story: Ofcom vs Free Speech...Ofcom's TV censorship extended to criticism of woke political ideas
Ofcom wrote:

People's Forum: The Prime Minister
GB News, 12 February 2024, 20:00

Ofcom received 547 complaints about this live, hour-long current affairs programme which featured the Prime Minister, Rishi Sunak, in a question-and-answer session with a studio audience about the Government's policies and performance, in the context of the forthcoming UK General Election.

We considered that this constituted a matter of major political controversy and a major matter relating to current public policy. When covering major matters, all Ofcom licensees must comply with the heightened special impartiality requirements in the Code. These rules require broadcasters to include and give due weight to an appropriately wide range of significant views within a programme or in clearly linked and timely programmes.

Ofcom had no issue with this programme's format in principle. Broadcasters have freedom to decide the editorial approach of their programmes as long as they comply with the Code. We took into account factors such as: the audience's questions to the Prime Minister; his responses; the Presenter's contribution; and whether due impartiality was preserved through clearly linked and timely programmes. In this case:

  • While some of the audience's questions provided some challenge to, and criticism of, the Government's policies and performance, audience members were not able to challenge the Prime Minister's responses and the Presenter did not do this to any meaningful extent.

  • The Prime Minister was able to set out some future policies that his Government planned to implement, if re-elected in the forthcoming UK General Election. Neither the audience nor the Presenter challenged, or otherwise referred to, significant alternative views on these.

  • The Prime Minister criticised aspects of the Labour Party's policies and performance. While politicians are of course able to do this in programmes, licensees must ensure that due impartiality is preserved. Neither the Labour Party's views or positions on those issues, nor any other significant views on those issues, were included in the programme or given due weight.

  • The Licensee did not, and was not able to, include a reference in the programme to an agreed future programme in which an appropriately wide range of significant views on the major matter would be presented and given due weight.

We found that an appropriately wide range of significant viewpoints was not presented and given due weight in this case. As a result, Rishi Sunak had a mostly uncontested platform to promote the policies and performance of his Government in a period preceding a UK General Election.

GB News failed to preserve due impartiality, in breach of Rules 5.11 and 5.12 of the Code. Our decision is that this breach was serious and repeated.

We will therefore consider this breach for the imposition of a statutory sanction.


Update: GB News to challenge Ofcom's censorship in the courts

21st May 2024.

A GB News spokesperson responded to the Ofcom censorship:

GB News has begun the formal legal process of challenging recent Ofcom decisions which go against journalists' and broadcasters' rights to make their own editorial judgements in line with the law and which also go against Ofcom's own rules.

Ofcom is obliged by law to uphold freedom of expression. Ofcom is also obliged to apply its rules fairly and lawfully. We believe that, for some time now, Ofcom has been operating in the exact opposite manner.

We cannot allow freedom of expression and media freedom to be trampled on in this way.

Freedom of the press is a civil right established by the British in the seventeenth century with the abolition of censorship and licensing of the printing press.

We refuse to stand by and allow this right to be threatened. As the People's Channel we champion this freedom; for our viewers, for our listeners, for everyone in the United Kingdom.


Offsite Comment: Ofcom's contempt for GB News viewers

21st May 2024. See article by Andrew Tettenborn

How, you might ask, could a show featuring independently selected, non-aligned voters directly quizzing an embattled PM breach impartiality rules? The Ofcom ruling makes no sense, at least if you look at it from the perspective of the average, level-headed man or woman in the street. But then, the apparatchiks who run Ofcom are neither particularly level-headed nor remotely reflective of the average voter.



Offsite Comment: The real reason Ofcom has gone after GB News

27th May 2024. See article by Toby Young



Upload moderation, the latest EU buzzword for messaging surveillance...

The latest EU proposal for governments to snoop on (once) private messaging

Link Here 26th May 2024
Full story: Mass snooping in the EU...The EU calls for member states to implement internet snooping with response to police requests in 6 hours
EU governments might soon implement messaging surveillance, euphemistically labelled as chat control, based on a new proposal by Belgium's Minister of the Interior. According to a leak obtained by Pirate Party MEP and shadow rapporteur Patrick Breyer, this could happen as early as June.

The proposal mandates that users of communication apps must agree to have all images and videos they send automatically scanned and potentially reported to the EU and police.

This agreement would be obtained through terms and conditions or pop-up messages. To facilitate this, secure end-to-end encrypted messenger services would need to implement monitoring backdoors, effectively causing a ban on private messaging. The Belgian proposal frames this as upload moderation, claiming it differs from client-side scanning. Users who refuse to consent would still be able to send text messages but would be barred from sharing images and videos.

The proposal, first introduced on 8 May, has surprisingly gained support from several governments that were initially critical. It will be revisited on 24 May, and EU interior ministers are set to meet immediately following the European elections to potentially approve the legislation.



Know Your Creator...

Twitter introduces mandatory identity verification for paid creators

Link Here 26th May 2024
Full story: Twitter Privacy...The sharing of user data for advertising purposes
X (Twitter) is now mandating the use of a government ID-based account verification system for users that earn revenue on the platform -- either for advertising or for paid subscriptions.

To implement this system, X has partnered with Au10tix, an Israeli company known for its identity verification solutions.

The move raises profound questions about privacy and free speech, as X claims itself to be a free speech platform, and free speech and anonymity often go hand-in-hand.

X explains:

The update to X's verification page now reads: Starting today, all new creators must verify their ID to receive payouts. All existing creators must do so by July 1, 2024.






Offsite Article: False accusations...

Link Here 26th May 2024
Full story: CCTV with facial recognition...Police introduce live facial recognition system
It could be you! Shops are using facial recognition cameras which make mistakes




Updated: Robot Dreams...

Further details of BBFC category cuts for PG rated cinema release

Link Here 24th May 2024

Robot Dreams is a 2023 Spain/France animation drama by Pablo Berger
Starring Ivan Labanda, Tito Trifol and Rafa Calvo Melon Farmers link  BBFC link 2020  IMDb

BBFC category cuts were required for a distributor-requested BBFC PG rated cinema release in 2024.

Summary Notes

Based on the popular graphic novel by the North American writer Sara Varon, ROBOT DREAMS tells the adventures and misfortunes of Dog and Robot in NYC during the '80s.


BBFC cut
category cuts by substitution
run: 102:29s
pal: 98:23s

IFCO cinema G

UK: BBFC PG rated for mild rude humour after BBFC category cuts:
  • 2024 Artificial Eye Film Co. cinema release (rated 30/01/2024)

The BBFC commented:

The company obscured rude gestures in order to achieve their preferred category of PG. An uncut 12A classification was available.

Thanks to Jon:

The BBFC were initially set to give the film an uncut 12A rating. However, when Artificial Eye heard the film was going to be a 12A, it was Artificial Eye who said they wanted a PG, because the film would be ideal for younger kids and was due to be released during the February school half-term. At that point the BBFC stated to Artificial Eye that if they wanted a PG, the middle finger gesture would need to be removed.

Artificial Eye went back to the director, explained the situation, and between him and Artificial Eye, they agreed that the best solution would be to animate a black rectangle over the offending moment, so that adults would still get the joke, but kids would not, and the censorship would appease the BBFC without any cuts being made to the film's duration.

The animators added-in the black rectangle, the film was resubmitted, and the film was passed with the new censored middle finger scene as a PG.

The film could easily have been left uncut with a 12A, and it would not have affected anything. But the distributor stupidly wanted a kid-friendly rating, despite the fact most kids couldn't follow the film and didn't find it especially entertaining. So Artificial Eye really ballsed-up here.

Thanks to Scott, Jake and Chris.

The rude gesture was in fact the robot showing his middle finger when copying members of a street gang. The director was involved in deciding how to obscure the gesture by covering it up. The solution to cover up the gesture with a black strip proved more than acceptable as it gets a laugh from viewers realising that the robot is making the gesture and that this has been censored.

Ireland: The cut UK version was IFCO G rated for consumer advice: explores themes of friendship and loss with positive resolution:
  • 2024 Curzon cinema release (rated 16/01/2024)



More self classification...

Australian government consults about possible changes to its media censorship scheme

Link Here 19th May 2024
Full story: Australian Censorship Review... Reviewing censorship law for all media

On 29 March 2023, the Minister for Communications, the Hon Michelle Rowland MP, announced the government would undertake a two-stage process to reform the Scheme. The first stage of reforms will be implemented in full during 2024 and include:

  • introducing mandatory minimum classifications for gambling-like content in computer games

  • expanding options for industry to self-classify content using individuals who have been trained and accredited by government

  • extending the Classification Board's powers to quality assure self-classification decisions

  • expanding exemptions from classification for low-risk cultural content

  • removing the need for content that has been classified under the Broadcasting Services Act 1992, or by the national broadcasters, and has not changed, to be re-classified for distribution in other formats.

The second stage of reforms will be more comprehensive in scope and will establish a framework for classification that is fit-for-purpose and will serve Australia into the future.

We want to hear your views on the following 3 key areas identified for consideration as part of the second stage of classification reforms:

  • clarifying the scope and purpose of the Scheme, including the types of content that should be classified

  • ensuring the classification guidelines continue to be aligned with, and responsive to, evolving community standards and expectations, and

  • establishing fit-for-purpose governance and regulatory arrangements for the Scheme, under a single national regulator responsible for media classification.



Soldier Blue...

With a long history of censor cuts, the film has been further cut by the BBFC for a 2024 video release

Link Here 16th May 2024
Soldier Blue is a 1970 USA western by Ralph Nelson.
Starring Candice Bergen, Peter Strauss and Donald Pleasence. Melon Farmers link  BBFC link 2020  IMDb

Reportedly cut in the US for an MPAA rating in 1970. These cuts have long since been forgotten and the R rated version is the best available. The film was cut for an MPAA PG rating in 1974 but the R rating was restored in 1976.

The R rated version was cut by the BBFC for X rated cinema release and 18 rated VHS. The BBFC cuts for sexual violence were waived in 2004 for 18 rated DVD releases but new cuts were required for horse falls. In 2024 further BBFC cuts were required for previously unnoticed indecent images of a child.

Summary Notes

After a cavalry group is massacred by the Cheyenne, only two survivors remain: Honus, a naive private devoted to his duty, and Cresta, a young woman who had lived with the Cheyenne two years and whose sympathies lie more with them than with the US government. Together, they must try to reach the cavalry's main base camp. As they travel onward, Honus is torn between his growing affection for Cresta, and his disgust for her anti-American beliefs. They reach the cavalry campsite on the eve of an attack on a Cheyenne village, where Honus will learn which side has really been telling him the truth.


best available
run: 115m
pal: 110m
MPAA R US: Uncut and MPAA R rated for:

There are reports that the original US R rated version was cut to avoid an X rating in 1970. The film was cut for a PG rating in 1974 but the R rating was restored in 1976. Any such cuts have long since been forgotten though, and the current uncut US version is definitive.

mpaa cut
MPAA PG US: Cut for an MPAA PG rating in 1974
BBFC cut
run: 115:19s
pal: 110:42s
18 UK: BBFC 18 rated for strong violence, sexual violence after BBFC cuts:
  • 2024 Studio Canal Blu-ray (rated 13/05/2024)

The BBFC commented:

Compulsory Cuts were required for illegal horse falls and potentially indecent images of a child.

BBFC cut
cut: 6s
run: 114:38s
pal: 110:03s
Passed 18 after 6s of BBFC cuts with previous cuts for violence restored for:
  • UK 2008 Optimum R2 DVD

The BBFC commented: Four cuts were required to remove the presence of cruel, dangerous and illegal horse falls

total cuts
cut: 28s
run: 114:12s
pal: 109:38s

BBFC cut


Passed 18 after 7s of BBFC cuts and 23s of distributor cuts for:
  • UK 2005 Momentum R2 DVD

Bizarrely, the Momentum Pictures resubmission in 2005 ended up unnecessarily more cut than the previous version. The BBFC made the following statement: The BBFC requested 7 seconds of cuts to remove cruel horsefalls but the distributor made additional voluntary cuts of 23 seconds to remove an acceptable horsefall and part of a rape scene. These cuts were not required by the BBFC but were in line with cuts made to previous video releases.

BBFC cut
cut: 23s
run: 114:02s
pal: 109:28s
Passed 18 after 23s of BBFC cuts for:
  • UK 1999 BMG VHS

The BBFC cuts were:

  • The cuts as per the previous Embassy version, except that slightly less was cut from the rape scene
BBFC cut
cut: 36s
run: 113:25s
pal: 108:53s
Passed 18 after 36s of BBFC cuts for:
  • UK 1986 Embassy VHS

The cuts were:

  • Cuts are to a scene showing the rape of an Indian woman, during the massacre of the village
  • Cuts to a shot of a naked Indian woman strung up by her wrists with blood on her breasts
  • A horsefall has been removed
BBFC cut
sub: 114:39s
Passed X (18) after BBFC cuts for:
  • UK 1970 cinema release

The BBFC cuts were:

  • Cuts to a scene showing the rape of an Indian woman, during the massacre of the village
  • Decapitation of indian squaw removed



Competitors get all steamed up...

Video game distribution platform Steam has been blocked by all ISPs in Vietnam

Link Here 11th May 2024
The video game distribution platform Steam has been banned entirely in Vietnam.

Vietnamese players took to Steam forums, saying all of the country's internet providers had blocked access to both Valve's app and the browser version of the store. One commenter said they spoke to someone who claimed the order came from above.

Neither Valve nor anyone from Vietnam's government has spoken on the matter.

An article in VietnamNet suggested that the ban may be connected to domestic publishers. A representative for one domestic publisher claimed Steam can put out games in the country without having to seek permission from the local government like Vietnamese developers have to. According to them, Valve's alleged ability to break the rules is an injustice to domestic publishers.



Making Britain the craziest place to run a business online...

Ofcom goes full on nightmare with age/ID verification for nearly all websites coupled with a mountain of red tape and expense

Link Here 8th May 2024
Full story: Online Safety Bill...UK Government legislates to censor social media
With a theatrical flourish pandering to the 'won't somebody think of the children' mob, Ofcom has proposed a set of censorship rules that demand strict age/ID verification for practically every single website that allows users to post content. On top of that it is proposing the most onerous mountain of expensive red tape seen in the western world.

There are a few clever sleights of hand that drag most of the internet into the realm of strict age/ID verification. Ofcom argues that nearly all websites will have child users, because 16 and 17 year old 'children' have more or less the same interests as adults and so there is no content that is not of interest to 'children'.

And so all websites will have to offer content that is appropriate to children of all ages, or else put in place strict age/ID verification to ensure that content is appropriate to age.

And at every stage of deciding website policy, Ofcom is demanding extensive justification of the decisions made and proof of the data used in making them. The amount of risk assessments, documents, research and evidence required makes the 'health and safety' regime look like child's play.

On occasions in the consultation documents Ofcom acknowledges that this will impose a massive administrative burden, but swats away criticism by noting that this is the fault of the Online Safety Act itself, and not Ofcom's.


Comment: Online Safety proposals could cause new harms


Ofcom's consultation on safeguarding children online exposes significant problems regarding the proposed implementation of age-gating measures. While aimed at protecting children from digital harms, the proposed measures introduce risks to cybersecurity, privacy and freedom of expression.

Ofcom's proposals outline the implementation of age assurance systems, including photo-ID matching, facial age estimation, and reusable digital identity services, to restrict access to popular platforms like Twitter, Reddit, YouTube, and Google that might contain content deemed harmful to children.

Open Rights Group warns that these measures could inadvertently curtail individuals' freedom of expression while simultaneously exposing them to heightened cybersecurity risks.

Jim Killock, Executive Director of Open Rights Group, said:

Adults will be faced with a choice: either limit their freedom of expression by not accessing content, or expose themselves to increased security risks that will arise from data breaches and phishing sites.

Some overseas providers may block access to their platforms from the UK rather than comply with these stringent measures.

We are also concerned that educational and help material, especially where it relates to sexuality, gender identity, drugs and other sensitive topics may be denied to young people by moderation systems.

Risks to children will continue with these measures. Regulators need to shift their approach to one that empowers children to understand the risks they may face, especially where young people may look for content, whether it is meant to be available to them or not.

Open Rights Group underscores the necessity for privacy-friendly standards in the development and deployment of age-assurance systems mandated by the Online Safety Act. Killock notes, Current data protection laws lack the framework to pre-emptively address the specific and novel cybersecurity risks posed by these proposals.

Open Rights Group urges the government to prioritize comprehensive solutions that incorporate parental guidance and education rather than relying largely on technical measures.



Big Brother is watching with a hair trigger system to report images resembling child sexual abuse...

Opposition to a secretive and dangerous EU proposal to force snooping software on people's phones and computers

Link Here 5th May 2024
Full story: Internet Encryption in the EU...Encryption is legal for the moment but the authorites are seeking to end this
A controversial and secretive push by European Union lawmakers to legally require messaging platforms to scan citizens' private communications for child sexual abuse material (CSAM) could lead to millions of false positives per day, hundreds of security and privacy experts have warned in an open letter.

Concern over the EU proposal has been building since the Commission proposed the CSAM-scanning plan two years ago, with independent experts, lawmakers across the European Parliament and even the bloc's own Data Protection Supervisor among those sounding the alarm.

The EU proposal would not only require messaging platforms that receive a CSAM detection order to scan for known CSAM, but they would also have to use unspecified detection scanning technologies to try to pick up unknown CSAM and identify grooming activity as it's taking place, leading to accusations of lawmakers indulging in magical thinking-levels of technosolutionism.

The open letter has been signed by 309 experts from 35 countries. The letter reads:

Dear Members of the European Parliament, Dear Member States of the Council of the European Union,

Joint statement of scientists and researchers on EU's new proposal for the Child Sexual Abuse Regulation: 2nd May 2024

We are writing in response to the new proposal for the regulation introduced by the Presidency on 13 March 2024. The two main changes with respect to the previous proposal aim to generate more targeted detection orders, and to protect cybersecurity and encrypted data. We note with disappointment that these changes fail to address the main concerns raised in our open letter from July 2023 regarding the unavoidable flaws of detection techniques and the significant weakening of the protection that is inherent to adding detection capabilities to end-to-end encrypted communications. The proposal's impact on end-to-end encryption is in direct contradiction to the intent of the European Court of Human Rights' decision in Podchasov v. Russia on 13 February 2024. We elaborate on these aspects below.

Child sexual abuse and exploitation are serious crimes that can cause lifelong harm to survivors; certainly it is essential that governments, service providers, and society at large take major responsibility in tackling these crimes. The fact that the new proposal encourages service providers to employ a swift and robust process for notifying potential victims is a useful step forward.

However, from a technical standpoint, to be effective, this new proposal will also completely undermine communications and systems security. The proposal notably still fails to take into account decades of effort by researchers, industry, and policy makers to protect communications. Instead of starting a dialogue with academic experts and making data available on detection technologies and their alleged effectiveness, the proposal creates unprecedented capabilities for surveillance and control of Internet users. This undermines a secure digital future for our society and can have enormous consequences for democratic processes in Europe and beyond.

1. The proposed targeted detection measures will not reduce risks of massive surveillance

The problem is that flawed detection technology cannot be relied upon to determine cases of interest. We previously detailed security issues associated with the technologies that can be used to implement detection of known and new CSA material and of grooming, because they are easy to circumvent by those who want to bypass detection, and they are prone to errors in classification. The latter point is highly relevant for the new proposal, which aims to reduce impact by only reporting users of interest defined as those who are flagged repeatedly (as of the last draft: twice for known CSA material and three times for new CSA material and grooming). Yet, this measure is unlikely to address the problems we raised.

First, there is the poor performance of automated detection technologies for new CSA material and for the detection of grooming. The number of false positives due to detection errors is highly unlikely to be significantly reduced unless the number of repetitions is so large that the detection stops being effective. Given the large amount of messages sent in these platforms (in the order of billions), one can expect a very large amount of false alarms (in the order of millions).

Second, the belief that the number of false positives will be reduced significantly by requiring a small number of repetitions relies on the fallacy that for innocent users two positive detection events are independent and that the corresponding error probabilities can be multiplied. In practice, communications exist in a specific context (e.g., photos to doctors, legitimate sharing across family and friends). In such cases, it is likely that parents will send more than one photo to doctors, and families will share more than one photo of their vacations at the beach or pool, thus increasing the number of false positives for this person. It is therefore unclear that this measure makes any effective difference with respect to the previous proposal.
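The arithmetic behind this objection can be made concrete with a rough back-of-the-envelope sketch in Python. Every number below (the 0.1% error rate, the billion-message volume, the assumed conditional error rate for correlated photos) is an illustrative assumption for the sketch, not a figure from the letter:

```python
# Back-of-the-envelope sketch of the letter's false-positive arithmetic.
# Every number here is an illustrative assumption, not a figure from the letter.

messages_per_day = 1_000_000_000   # "in the order of billions", per the letter
false_positive_rate = 0.001        # assumed 0.1% error rate per scanned image

# Expected false alarms if a single detection event flags a user:
daily_false_alarms = messages_per_day * false_positive_rate
print(f"{daily_false_alarms:,.0f} false alarms per day")  # prints: 1,000,000 false alarms per day

# Requiring a second flag only multiplies the error probabilities
# if the two detection events are statistically independent:
p_two_independent = false_positive_rate ** 2               # ~1e-06

# In practice an innocent user's photos share a context (holiday snaps,
# photos sent to a doctor), so a second misclassification is far more
# likely once one has occurred. Model that crudely with an assumed
# conditional error rate for similar photos:
p_second_given_first = 0.2                                 # assumed correlation effect
p_two_correlated = false_positive_rate * p_second_given_first  # ~2e-04

# The correlated estimate is roughly 200x larger than the independence
# assumption suggests, so the repeated-flag threshold buys far less
# reduction in false positives than simple multiplication implies.
print(p_two_correlated / p_two_independent)  # ~200
```

Under these illustrative assumptions, the threshold of two flags reduces false positives by a factor of a few hundred at best, not the factor of a thousand that naive multiplication of independent error rates would promise.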

Furthermore, to realize this new measure, on-device detection with so-called client-side scanning will be needed. As we previously wrote, once such a capability is in place, there is little possibility of controlling what is being detected and which threshold is used on the device for such detections to be considered of interest. We elaborate below.

High-risk applications may still indiscriminately affect a massive number of people. A second change in the proposal is to only require detection on (parts of) services that are deemed to be high-risk in terms of carrying CSA material.

This change is unlikely to have a useful impact. As the exchange of CSA material or grooming only requires standard features that are widely supported by many service providers (such as exchanging chat messages and images), this will undoubtedly impact many services. Moreover, an increasing number of services deploy end-to-end encryption, greatly enhancing user privacy and security, which will increase the likelihood that these services will be categorised as high risk. This number may further increase with the interoperability requirements introduced by the Digital Markets Act that will result in messages flowing between low-risk and high-risk services. As a result, almost all services could be classified as high risk. This change is also unlikely to impact abusers. As soon as abusers become aware that a service provider has activated client side scanning, they will switch to another provider that will in turn become high risk; very quickly all services will be high risk, which defeats the purpose of identifying high risk services in the first place. And because open-source chat systems are currently easy to deploy, groups of offenders can easily set up their own service without any CSAM detection capabilities.

We note that decreasing the number of services is not even the crucial issue, as this change would not necessarily reduce the number of (innocent) users that would be subject to detection capabilities. This is because many of the main applications targeted by this regulation, such as email, messaging, and file sharing are used by hundreds of millions of users (or even billions in the case of WhatsApp).

Once a detection capability is deployed by the service, it is not technologically possible to limit its application to a subset of the users. Either it exists in all the deployed copies of the application, or it does not. Otherwise, potential abusers could easily find out if they have a version different from the majority population and therefore if they have been targeted. Therefore, upon implementation, the envisioned limitations associated with risk categorization do not necessarily result in better user discrimination or targeting, but in essence have the same effect for users as a blanket detection regulation.

2. Detection in end-to-end encrypted services by definition undermines encryption protection

The new proposal has as one of its goals to protect cyber security and encrypted data, while keeping services using end-to-end encryption within the scope of detection orders. As we have explained before, this is an oxymoron.

The protection given by end-to-end encryption implies that no one other than the intended recipient of a communication should be able to learn any information about the content of such communication. Enabling detection capabilities, whether for encrypted data or for data before it is encrypted, violates the very definition of confidentiality provided by end-to-end encryption. Moreover, the proposal also states that This Regulation shall not create any obligation that would require [a service provider] to decrypt or create access to end-to-end-encrypted data, or that would prevent the provision of end-to-end encrypted services. This can be misleading, as whether the obligation to decrypt exists or not, the proposal undermines the protection provided by end-to-end encryption.

This has catastrophic consequences. It sets a precedent for filtering the Internet, and prevents people from using some of the few tools available to protect their right to a private life in the digital space; it will have a chilling effect, in particular to teenagers who heavily rely on online services for their interactions. It will change how digital services are used around the world and is likely to negatively affect democracies across the globe. These consequences come from the very existence of detection capabilities, and thus cannot be addressed by either reducing the scope of detection in terms of applications or target users: once they exist, all users are in danger. Hence, the requirement of Art. 10 (aa) that a detection order should not introduce cybersecurity risks for which it is not possible to take any effective measures to mitigate such risk is not realistic, as the risk introduced by client side scanning cannot be mitigated effectively.

3. Introducing more immature technologies may increase the risk

The proposal states that age verification and age assessment measures will be taken, creating a need to prove age in services that previously did not require it. It then bases some of the arguments related to the protection of children on the assumption that such measures will be effective. We would like to point out that at this time there is no established, well-proven technological solution that can reliably perform these assessments. The proposal also states that such verification and assessment should preserve privacy. We note that this is a very hard problem. While there is research towards technologies that could assist in implementing privacy-preserving age verification, none of them are currently in the market. Integrating them into systems in a secure way is far from trivial. Any solutions to this problem need to be very carefully scrutinized to ensure that the new assessments do not result in privacy harms or discrimination causing more harm than the one they were meant to prevent.

4. Lack of transparency

It is regrettable that the proposers failed to consult security and privacy experts about what is feasible before putting forward a proposal that cannot work technologically. The proposal pays insufficient attention to the technical risks and, while claiming to be technologically neutral, imposes requirements that cannot be met by any state-of-the-art system (e.g., a low false-positive rate, secrecy of the parameters and algorithms when deployed on a large number of devices, and the existence of representative simulated CSA material).
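The letter's point about false-positive rates can be made concrete with simple base-rate arithmetic. The figures below are purely illustrative assumptions (they do not come from the letter): a scanning system covering billions of messages per day, a very low prevalence of genuinely abusive content, and a detector with a seemingly impressive 0.1% false-positive rate. Even under these generous assumptions, the system generates millions of false accusations daily and most flagged messages are innocent.

```python
# Illustrative sketch (hypothetical figures, not from the letter):
# why a "low" false-positive rate is unworkable at messaging scale.
daily_messages = 10_000_000_000  # assumed messages scanned per day
prevalence = 1 / 10_000          # assumed fraction that is truly abusive
fpr = 0.001                      # assumed 0.1% false-positive rate
tpr = 0.90                       # assumed 90% detection (sensitivity)

true_hits = daily_messages * prevalence * tpr
false_hits = daily_messages * (1 - prevalence) * fpr

# Precision: probability that a flagged message is actually abusive.
precision = true_hits / (true_hits + false_hits)

print(f"False flags per day: {false_hits:,.0f}")
print(f"Chance a flagged message is truly abusive: {precision:.1%}")
```

Under these assumptions the system flags roughly ten million innocent messages every day, and fewer than one in ten flags is correct. Each of those flags is a private message exposed to review, which is the core of the letter's objection.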

We strongly recommend that not only should this proposal not move forward, but that before such a proposal is presented in future, the proposers engage in serious conversations about what can and cannot be done within the context of guaranteeing secure communications for society.

5. Secure paths forward for child protection

Protecting children from online abuse while preserving their right to secure communications is critical. It is important to remember that CSAM is the output of child sexual abuse: eradicating CSAM depends on eradicating the abuse itself, not only the abuse material. Proven approaches recommended by organisations such as the UN for eradicating abuse include education on consent, norms and values, digital literacy and online safety; comprehensive sex education; trauma-sensitive reporting hotlines; and keyword-search-based interventions. Educational efforts can take place in partnership with platforms, which can prioritise high-quality educational results in search or collaborate with their content creators to develop engaging resources.

We recommend substantial increases in investment and effort to support existing proven approaches to eradicate abuse, and with it, abusive material. Such approaches stand in contrast to the current techno-solutionist proposal, which is focused on vacuuming up abusive material from the internet at the cost of communication security, with little potential for impact on abuse perpetrated against children.

UK signatories

Dr. Ruba Abu-Salma, King's College London
Prof. Martin Albrecht, King's College London
Dr. Andrea Basso, University of Bristol
Prof. Ioana Boureanu, University of Surrey
Prof. Lorenzo Cavallaro, University College London
Dr. Giovanni Cherubin, Microsoft
Dr. Benjamin Dowling, University of Sheffield
Dr. Francois Dupressoir, University of Bristol
Dr. Jide Edu, University of Strathclyde
Dr. Arthur Gervais, University College London
Prof. Hamed Haddadi, Imperial College London
Prof. Alice Hutchings, University of Cambridge
Dr. Dennis Jackson, Mozilla
Dr. Rikke Bjerg Jensen, Royal Holloway University of London
Prof. Keith Martin, Royal Holloway University of London
Dr. Maryam Mehrnezhad, Royal Holloway University of London
Prof. Sarah Meiklejohn, University College London
Dr. Ngoc Khanh Nguyen, King's College London
Prof. Elisabeth Oswald, University of Birmingham
Dr. Daniel Page, University of Bristol
Dr. Eamonn Postlethwaite, King's College London
Dr. Kopo Marvin Ramokapane, University of Bristol
Prof. Awais Rashid, University of Bristol
Dr. Daniel R. Thomas, University of Strathclyde
Dr. Yiannis Tselekounis, Royal Holloway University of London
Dr. Michael Veale, University College London
Prof. Dr. Luca Vigano, King's College London
Dr. Petros Wallden, University of Edinburgh
Dr. Christian Weinert, Royal Holloway University of London



Commented Safe methods prove elusive...

Australian government to spend its own money on trying to find a safe method of age/ID verification for porn viewing

Link Here5th May 2024
Full story: Age Verification for Porn...Endangering porn users for the sake of the children
As part of its efforts to combat violence against women, the government of Australian Prime Minister Anthony Albanese has announced funding to test age/ID verification methods for pornography websites in a pilot program. This move came after Albanese and the national cabinet ruled in 2023 that mandatory age verification was not yet an option.

AU$6.5 million has been allocated for a pilot of age assurance technology to test its effectiveness. The pilot will identify available age assurance products and assess their efficacy, including in relation to privacy and security. The outcomes of this pilot will support the eSafety Commissioner's ongoing implementation of censorship rules under the Online Safety Act.

Australia's prime minister has also moved to ban deepfake and artificial intelligence pornography as part of a $925 million bid to counter a rise in violence against women. Sharing sexually explicit material created using artificial intelligence will also be subject to serious criminal penalties.

Albanese noted community concerns about toxic male views online and young men's exposure to violent imagery on the internet.


Offsite Comment: The Australian Government Is Making Porn a Scapegoat for Rising Violence Against Women

5th May 2024. Thanks to Trog. See article by Darcy Deviant

Here is an article offering a very sensible counter-argument to the usual porn-is-bad diatribes:

As a sex worker, the most concerning part of this conversation is the use of the sex industry as a political scapegoat for men's violence.

Let's be clear: the porn industry was never created to provide sex education to children. But let's also be honest: if your child is actively seeking out pornography, or so-called violent pornography, perhaps there's a gap in their learning about sex and sexuality that the education system or a guardian has failed to fill.



