The BBFC decides that other streamers may re-use BBFC ratings determined by the likes of Netflix
23rd April 2024
See meeting minutes [pdf] from darkroom.bbfc.co.uk
The BBFC commented in recent board meeting minutes:

BBFC Classifiers discussed a policy proposal to allow BBFC age ratings issued by self-rating partners such as Netflix to be made available for wider online use by other VoD services licensed to carry BBFC ratings. This proposal will promote greater ratings consistency across the VoD landscape, to help families make safe and informed viewing decisions. The Classifiers approved the new policy to be implemented on a 12 month trial basis, after which the BBFC will review its impact and effectiveness.
European police chiefs disgracefully call for citizens to lose their basic internet protection from Russian and Chinese spies, scammers, thieves and blackmailers.
23rd April 2024
See article from reclaimthenet.org
See police statement [pdf] from docs.reclaimthenet.org
European police chiefs have called for Europeans to be deprived of basic internet security used to protect against Russian and Chinese spies, scammers, thieves and blackmailers. The police chiefs write:

Joint Declaration of the European Police Chiefs

We, the European Police Chiefs, recognise that law enforcement and the technology industry have a shared duty to keep the public safe, especially children. We have a proud partnership of complementary actions towards that end. That partnership is at risk.

Two key capabilities are crucial to supporting online safety. First, the ability of technology companies to reactively provide to law enforcement investigations -- on the basis of a lawful authority with strong safeguards and oversight -- the data of suspected criminals on their service. This is known as lawful access. Second, the ability of technology companies proactively to identify illegal and harmful activity on their platforms. This is especially true in regards to detecting users who have a sexual interest in children, exchange images of abuse and seek to commit contact sexual offences. The companies currently have the ability to alert the proper authorities -- with the result that many thousands of children have been safeguarded, and perpetrators arrested and brought to justice.

These are quite different capabilities, but together they help us save many lives and protect the vulnerable in all our countries on a daily basis from the most heinous of crimes, including but not limited to terrorism, child sexual abuse, human trafficking, drugs smuggling, murder and economic crime. They also provide the evidence that leads to prosecutions and justice for victims of crime.

We are, therefore, deeply concerned that end-to-end encryption is being rolled out in a way that will undermine both of these capabilities. Companies will not be able to respond effectively to a lawful authority. Nor will they be able to identify or report illegal activity on their platforms. As a result, we will simply not be able to keep the public safe.

Our societies have not previously tolerated spaces that are beyond the reach of law enforcement, where criminals can communicate safely and child abuse can flourish. They should not now. We cannot let ourselves be blinded to crime. We know from the protections afforded by the darkweb how rapidly and extensively criminals exploit such anonymity.

We are committed to supporting the development of critical innovations, such as encryption, as a means of strengthening the cyber security and privacy of citizens. However, we do not accept that there need be a binary choice between cyber security or privacy on the one hand and public safety on the other. Absolutism on either side is not helpful. Our view is that technical solutions do exist; they simply require flexibility from industry as well as from governments. We recognise that the solutions will be different for each capability, and also differ between platforms.

We therefore call on the technology industry to build in security by design, to ensure they maintain the ability to both identify and report harmful and illegal activities, such as child sexual exploitation, and to lawfully and exceptionally act on a lawful authority. We call on our democratic governments to put in place frameworks that give us the information we need to keep our publics safe.

Trends in crime are deeply concerning and show how offenders increasingly use technology to find and exploit victims and to communicate with each other within and across international boundaries. It must be our shared objective to ensure that those who seek to abuse these platforms are identified and caught, and that the platforms become more safe not less.
See article from reclaimthenet.org

Here we have Europol and the UK's National Crime Agency (NCA) teaming up to attack Meta for the one thing the company is apparently trying to do right: implementing end-to-end encryption (E2EE) in its products, the necessary and irreplaceable backbone of a safe and secure internet for everybody. Yet that is what many governments, here the EU via Europol and the UK, keep attempting to damage.

But mass surveillance is a hard sell, so the established pitch is to link the overall internet problem to the safety of children online, and justify it that way. The Europol executive director, Catherine De Bolle, compared E2EE to sending your child into a room full of strangers and locking the door. And yet the technological reality of the situation is that undermining E2EE is akin to giving the key to your front door, and access to everybody in your house, children included, to somebody you trust (say, governments and organizations who would like you to take their trustworthiness for granted). But once a copy of that key is out, it can be obtained and used by anybody to get into your house at any time, for any reason. That includes governments and organizations you don't trust or like, straight-up criminals, and anything active on the web in between.
Instagram will detect nude photos in private messages and initially blur them
21st April 2024
See blog post from about.instagram.com
New Tools to Help Protect Against Sextortion and Intimate Image Abuse

We're testing new features to help protect young people from sextortion and intimate image abuse, and to make it more difficult for potential scammers and criminals to find and interact with teens. We're also testing new ways to help people spot potential sextortion scams, encourage them to report and empower them to say no to anything that makes them feel uncomfortable. We've started sharing more signals about sextortion accounts to other tech companies through Lantern, helping disrupt this criminal activity across the internet.

While people overwhelmingly use DMs to share what they love with their friends, family or favorite creators, sextortion scammers may also use private messages to share or ask for intimate images. To help address this, we'll soon start testing our new nudity protection feature in Instagram DMs, which blurs images detected as containing nudity and encourages people to think twice before sending nude images. This feature is designed not only to protect people from seeing unwanted nudity in their DMs, but also to protect them from scammers who may send nude images to trick people into sending their own images in return.

Nudity protection will be turned on by default for teens under 18 globally, and we'll show a notification to adults encouraging them to turn it on. When nudity protection is turned on, people sending images containing nudity will see a message reminding them to be cautious when sending sensitive photos, and that they can unsend these photos if they've changed their mind. Anyone who tries to forward a nude image they've received will see a message encouraging them to reconsider.

When someone receives an image containing nudity, it will be automatically blurred under a warning screen, meaning the recipient isn't confronted with a nude image and they can choose whether or not to view it. We'll also show them a message encouraging them not to feel pressure to respond, with an option to block the sender and report the chat.

Nudity protection uses on-device machine learning to analyze whether an image sent in a DM on Instagram contains nudity. Because the images are analyzed on the device itself, nudity protection will work in end-to-end encrypted chats, where Meta won't have access to these images -- unless someone chooses to report them to us.
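The client-side flow Meta describes (classify on the device, blur before display, never upload the original) can be sketched as follows. This is a minimal illustration, not Meta's actual code: the classifier is a stub (Meta has not published its model), and the confidence threshold and blur radius are invented for the example.

```python
from statistics import mean

BLUR_RADIUS = 1          # neighbourhood half-width for the box blur (invented)
NUDITY_THRESHOLD = 0.8   # hypothetical confidence cut-off (invented)

def classify(image):
    """Stand-in for the on-device classifier; this stub just returns
    a fixed confidence score."""
    return 0.9

def box_blur(image):
    """Blur a grayscale image (list of rows of 0-255 ints) by
    averaging each pixel with its neighbours."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            vals = [image[ny][nx]
                    for ny in range(max(0, y - BLUR_RADIUS), min(h, y + BLUR_RADIUS + 1))
                    for nx in range(max(0, x - BLUR_RADIUS), min(w, x + BLUR_RADIUS + 1))]
            row.append(round(mean(vals)))
        out.append(row)
    return out

def prepare_for_display(image, protection_on):
    """Blur client-side when nudity protection is on and the classifier
    flags the image; the original pixels never leave the device."""
    if protection_on and classify(image) >= NUDITY_THRESHOLD:
        return box_blur(image)
    return image
```

The point of the design is that because everything above runs on the recipient's device, it remains compatible with end-to-end encryption: the server only ever sees ciphertext.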
Meta outlines plan for operating systems and app stores to take control of age/ID verification
19th April 2024
See article from biometricupdate.com
When the British Government started work on online censorship laws, I think it envisaged that age/ID verification would create a business opportunity for British start-up companies to exploit the market so created. Unfortunately for them, it now looks inevitable that the usual US internet giants will be the ones to profit from the requirements. In fact Meta has been promoting the idea that operating system companies and app stores should be the ones to implement age/ID verification.

Meta is calling for age verification to be implemented across Europe and has proposed a way to do it. The company wants to ensure that parents only need to verify the age of their child once, and noted that the most effective way of achieving this would be to have operating systems or app stores complete the verification process. The move would pass the responsibility for age verification from social media platforms to firms such as Apple and Google. Other platforms have also argued in favor of the solution, including Twitter and Match, the company behind dating apps like Tinder, Hinge and OkCupid.

Meta delivered its statement during a hearing of an Irish parliament committee focused on children's rights this week. Meta has been taking different approaches to try and ease pressure from global censors on the age verification question. The company has been experimenting with facial age estimation technology from UK firm Yoti in several countries.
Online porn restrictions are leading to a VPN boom

19th April 2024

See article from popsci.com
Drinks censor finds in favour of stylised brewery pump clip
14th April 2024
See article from portmangroup.org.uk
Complaint:

I would like to express deep concerns about the name, branding and pump clip design of Twickenham Brewery's ale Naked Ladies. In itself, the school-child sniggering tone of the name is offensive, representing outmoded and sexist attitudes to women which should have no place in public life, and certainly not in any industry that wishes to survive in the 21st century.

Moreover, from my recent experience of ordering a pint in a London pub in which this was the only available real ale, the name of the beer and pump clip imagery present real problems of offence and embarrassment. As a middle-aged man, the experience of saying the name of the beer to the young woman serving at the bar was awkward and unsettling for all concerned. As a pub-goer, I would obviously prefer not to be made uncomfortable by the simple act of ordering a pint. More importantly, bar staff should be able to go to work without being subjected to sexist and sexualised language and imagery which, given age/power differentials with customers, could well be interpreted as harassment or abuse.
Company statement:

The company stated that Naked Ladies was a best-selling beer and had been available for over 19 years. The company explained that it was one of a range of beers which were all named after local landmarks, with the name Naked Ladies relating to statues at York House, the home of Richmond and Twickenham Council. While the statues had no official name, locally they had become known as the Naked Ladies, and usage of the local name had transferred more formally, with the statues also listed as an entity with Historic England. The company explained that in addition to this, the name was also used to refer to the statues in several other sources, including The York House Society, reflecting that the name was well known by the general public.

The company explained that the pump clip included a graphical representation of the largest statue and that it had used such branding for approximately 10 years. The company explained that at all stages it endeavoured to ensure the link between the name and the statue was obvious and clear. This included a description of where the name derived on the company's website and on the packaging of the bottles, which referenced the local landmark.

The Panel's assessment:

The Panel discussed the product name Naked Ladies and its historical context as explained by the producer. The Panel noted that the name was a colloquial one used to refer to a group of statues at York House in Twickenham, which was a fairly well-known landmark as an entity listed with Historic England. The Panel discussed the company's response and noted that the name had not been used gratuitously, as the packaging and company website incorporated descriptive language designed to explain the historical context of the statue. The Panel noted that due to its smaller size and limited space the pump clip did not include the same information, but considered that the reference to Twickenham in the company's name did provide some context between the name of the beer and the local landmark. The Panel determined that the name may be distasteful to some, but that the overall impression, as opposed to the name in isolation, would determine whether the pump clip caused serious or widespread offence under Code rule 3.3.

The Panel discussed the artwork presented on the pump clip, which depicted one of the referenced local statues, a naked woman, at York House. The Panel considered the Portman Group's accompanying guidance to Code rule 3.3 and noted that to breach the rule in relation to sexual objectification the packaging or marketing would need to incorporate elements that were demeaning, derogatory, gratuitous or overly sexualised. The Panel considered that the design was artistically stylised, akin to art deco in style, with no identifying detail added to any of the statue's features. The Panel noted that the naked statue was modestly presented with its pubic area covered by hops, and considered that nudity in and of itself would not inherently cause serious or widespread offence, particularly nudity depicted by an art deco statue. The Panel noted that there was no undue focus on the statue's breasts, which were low definition and portrayed through a shadowing technique in keeping with the depiction of the rest of the statue. The Panel discussed the pose of the statue and noted that it was not positioned in a sexualised manner, which meant that it did not objectify the statue based on its gender or sexuality. The Panel considered the artwork in its historical context alongside the name Naked Ladies and considered that the pump clip was not demeaning, discriminatory or derogatory in its portrayal of women more broadly.

The Panel considered that the depiction of the statue and the name Naked Ladies did not cause serious or widespread offence. Accordingly, the complaint was not upheld under Code rule 3.3.
EU lobby group proposes to censor 'disinformation' via ICANN's control over worldwide domain names
10th April 2024
See article from reclaimthenet.org
EU DisinfoLab, a censorship lobby group regularly making policy recommendations to the EU and member states, is now pushing for a security structure created by ICANN (the Internet Corporation for Assigned Names and Numbers) to be used to censor what it deems disinformation.

Attempting to directly use ICANN would be highly controversial. Given its importance to internet infrastructure (ICANN manages domain names globally) and the fact that content control is not among its tasks (DisinfoLab says ICANN refuses to do it), this would represent a huge departure from the organization's role as we understand it today. But DisinfoLab now proposes to turn a structure that ICANN created for use against legitimate security threats into a tool to police the internet for content that somebody decides to treat as disinformation. It would require a minimal amount of diligence and cooperation from registries, a blog post said, to accept ICANN-style reports and revoke a site's domain name.
6,000 people avail themselves of Scotland's new free service to use the police to settle scores under the Hate Crime Act
8th April 2024
See article from reclaimthenet.org
Police Scotland is grappling with potential budgetary pressures and service reductions. David Threadgold of the Scottish Police Federation (SPF) has raised concerns about the financial impact of the Hate Crime and Public Order (Scotland) Act. According to him, the legislation has already led to an overload of calls, with over 6,000 logged since its enactment. Threadgold's worry centers on the unforeseen costs of handling these cases, particularly the overtime payments for control room staff. He believes these expenses will reverberate throughout the year, affecting other police services.

Calum Steele, former general secretary of the SPF, echoes these concerns. As reported by The Scotsman, Steele criticized Police Scotland's preparation for the Act, calling it "negligently unprepared" and pointing out that the additional costs were predictable.

The legislation's impact extends beyond financial strains. The Act has resulted in a notable rise in the logging of non-crime hate incidents: incidents perceived as hateful but not necessarily criminal. This increase has prompted concerns about a potential inundation of trivial or malicious complaints, especially in the context of highly charged events like football matches. Tory MSP Murdo Fraser has already lodged a complaint over a tweet he posted being logged as a hate incident.
Alabama State House passes bill to require Net Nanny-like filters to be installed on all phones and tablets and turned on for minors
8th April 2024
See article from al.com
The Alabama House of Representatives has passed a bill that would require makers of phones and tablets to fit the devices with a filter to block pornography, to be enabled when the device is activated for use by a minor. The bill, HB167 by Representative Chris Sells, passed by a vote of 98-0. It now moves to the Senate.

HB167 says that beginning on Jan. 1, 2026, all smartphones and tablets activated in the state must contain a filter, determine the age of the user during activation and account set-up, and set the filter to on for minor users. The filter must be able to block access to obscenity as it is defined under state law.

The bill says a manufacturer can be subject to civil and criminal liability if a device is activated in the state, does not, upon activation, enable a filter that complies with the law, and a minor accesses obscene material on the device. The bill says retailers would not be liable.
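The activation requirement and the liability test described above are both simple conjunctions of conditions. A rough sketch follows; this is an illustrative reading of the summary, not statutory text, and the under-18 threshold for "minor" is an assumption:

```python
from datetime import date

EFFECTIVE = date(2026, 1, 1)   # start date stated in HB167
ADULT_AGE = 18                 # assumed age of majority for "minor"

def filter_must_be_on(activation_date: date, user_age: int) -> bool:
    """Whether the obscenity filter must default to on at activation:
    the requirement applies to devices activated on or after the
    effective date, for users under the assumed adult age."""
    return activation_date >= EFFECTIVE and user_age < ADULT_AGE

def manufacturer_liable(activated_in_state: bool, filter_enabled: bool,
                        minor_accessed_obscenity: bool) -> bool:
    """All three conditions in the bill summary must hold for
    manufacturer liability; retailers are excluded regardless."""
    return activated_in_state and not filter_enabled and minor_accessed_obscenity
```

Note that liability as summarised requires all three conditions at once: an in-state activation with the filter off is not, by itself, enough without a minor actually accessing obscene material.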
Chechnya bans fast or slow music
8th April 2024
See article from themoscowtimes.com
Authorities in Russia's republic of Chechnya have imposed limits on music tempos to abide by strict cultural norms in the Muslim-majority region. From now on all musical, vocal and choreographic works should correspond to a tempo of 80 to 116 beats per minute, Chechnya's Culture Ministry said in a statement earlier this week.

The new tempo standard, which is relatively slow in the context of popular music, was announced following Chechen Culture Minister Musa Dadayev's meeting with local state and municipal artists. Chechen leader Ramzan Kadyrov had instructed Dadayev to make Chechen music conform to the Chechen mentality, according to the statement. Local artists were ordered to rewrite their music by June 1 to accommodate the changes. Otherwise, their music would not be allowed for public performance, the Culture Ministry wrote on the messaging app Telegram.
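The decree reduces to a single numeric test on a track's tempo. A trivial sketch; whether the 80 and 116 bounds are themselves permitted is an assumption, as the statement does not say:

```python
MIN_BPM, MAX_BPM = 80, 116  # range from the Culture Ministry statement

def is_permitted(tempo_bpm: float) -> bool:
    """Check a work's tempo against the decreed range, treating the
    bounds as inclusive (an assumption)."""
    return MIN_BPM <= tempo_bpm <= MAX_BPM
```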