Your Daily Broadsheet

 Latest news



  Freedom in the World 2018: Democracy in Crisis...

Freedom House publishes its annual survey


Link Here 19th February 2018
Freedom House has published its annual survey of freedom around the world. Its key findings are somewhat grim:
  • Democracy faced its most serious crisis in decades in 2017 as its basic tenets--including guarantees of free and fair elections, the rights of minorities, freedom of the press, and the rule of law--came under attack around the world.
  • Seventy-one countries suffered net declines in political rights and civil liberties, with only 35 registering gains. This marked the 12th consecutive year of decline in global freedom.
  • The United States retreated from its traditional role as both a champion and an exemplar of democracy amid an accelerating decline in American political rights and civil liberties.
  • Over the period since the 12-year global slide began in 2006, 113 countries have seen a net decline, and only 62 have experienced a net improvement.

See full report from freedomhouse.org

 

  Images of compromise...

Google tweaks its Image Search at the behest of Getty Images


Link Here 18th February 2018
Google has tweaked its image search to make it slightly more difficult to view images in full size before downloading them. Google has also added a more prominent copyright warning.

Google acted as part of a peace deal with photo library Getty Images.  In 2017, Getty Images complained to the European Commission, accusing Google of anti-competitive practices.

Google said it had removed some features from image search, including the view image button. Images can still be viewed in full size from the right-click menu, at least on my Windows version of Firefox. Google also removed the search by image button, which was an easy way of finding larger copies of photographs. Perhaps the tweaks are more about restricting the finding of high-resolution versions of images than about standard-sized images.

Getty Images is a photo library that sells the work of photographers and illustrators to businesses, newspapers and broadcasters. It complained that Google's image search made it easy for people to find Getty Images pictures and take them, without the appropriate permission or licence.

In a statement, Getty Images said:

We are pleased to announce that after working cooperatively with Google over the past months, our concerns are being recognised and we have withdrawn our complaint.

 

  The pits of world censorship...

Turkish TV company fined for unbeeped strong language on its internet player


Link Here 18th February 2018
Turkey's TV censor RTÜK has set a new precedent by issuing a fine to a channel that included, on its website, strong language that was beeped out when broadcast on TV.

The fine came when Show TV posted a clip of a sweary character from the hit mafia TV show Çukur (The Pit)  without the usual heavy censorship on its website.

A draft law that would enable RTÜK to regulate online video content is in the works, but the body appears to have begun regulating regardless.

 

 Offsite Article: Germany edges toward Chinese-style rating of citizens...


Link Here 18th February 2018
China is experimenting with a dystopian social credit system which grades every citizen based on their behavior. Germany is sleepwalking in the same direction. By Heike Jahberg

See article from global.handelsblatt.com

 

 Offsite Article: Out and out overblocking...


Link Here 18th February 2018  full story: Crap Internet Blocking...Cheapo automated filters are not up to the job
Sky UK blocks a gay-teen support website deemed pornographic by its algorithms

See article from gaystarnews.com

 

 Offsite Article: Body positive art seen in a negative light...


Link Here 18th February 2018
PC confusion in Minnesota

See article from citypages.com

 

  Endangering porn stars...

A German court finds that Facebook's real name policy is illegal, and a Belgian court tells Facebook to delete tracking data on people not signed up to Facebook


Link Here 17th February 2018  full story: Facebook Privacy...Facebook criticised for discouraging privacy

Germany

In a ruling of particular interest to those working in the adult entertainment biz, a German court has ruled that Facebook's real name policy is illegal and that users must be allowed to sign up for the service under pseudonyms.

The opinion comes from the Berlin Regional Court and was disseminated by the Federation of German Consumer Organizations, which filed the suit against Facebook. The Berlin court found that Facebook's real name policy was a covert way of obtaining users' consent to share their names, which are one of many pieces of information for which the court said Facebook did not properly obtain users' permission.

The court also said that Facebook didn't provide a clear-cut choice to users for other default settings, such as to share their location in chats. It also ruled against clauses that allowed the social media giant to use information such as profile pictures for commercial, sponsored or related content.

Facebook told Reuters it will appeal the ruling, but also that it will make changes to comply with European Union privacy laws coming into effect in June.

Belgium

Facebook has been ordered to stop tracking people without consent, by a court in Belgium. The company has been told to delete all the data it had gathered on people who did not use Facebook. The court ruled the data was gathered illegally.

Belgium's privacy watchdog said the website had broken privacy laws by placing tracking code on third-party websites.

Facebook said it would appeal against the ruling.

The social network faces fines of 250,000 euros a day if it does not comply.

The ruling is the latest in a long-running dispute between the social network and the Belgian commission for the protection of privacy (CPP). In 2015, the CPP complained that Facebook tracked people when they visited pages on the site or clicked like or share, even if they were not members.

 

  Is internet censorship thriving under the radar in the UK?...

YouTube videos that are banned for UK eyes only


Link Here 17th February 2018

The United Kingdom's reputation for online freedom has suffered significantly in recent years, in no small part due to the draconian Investigatory Powers Act, which came into force last year and created what many people have described as the worst surveillance state in the free world.

But despite this, the widely held perception is that the UK still allows relatively free access to the internet, even if it does insist on keeping records of which sites you are visiting. But how true is this perception?

...

There is undeniably more online censorship in the UK than many people would like to admit to. But is this just the tip of the iceberg? The censorship of one YouTube video suggests that it might just be. The video in question contains footage filmed by a trucker of refugees trying to break into his vehicle in order to get across the English Channel and into the UK. This is a topic which has been widely reported in British media in the past, but in the wake of the Brexit vote and the removal of the so-called 'Jungle Refugee Camp', there has been strangely little coverage.

The video in question is entitled 'Epic Hungarian Trucker runs the Calais migrant gauntlet.' It is nearly 15 minutes long and does feature the driver's extremely forthright opinions about the refugees in question, as well as some fairly blue language.

Yet, if you try to access this video in the UK, you will find that it is blocked. It remains accessible to users elsewhere in the world, albeit with content warnings in place.

And it is not alone. It doesn't take too much research to uncover several similar videos which are also censored in the UK. The scale of the issue likely requires further research. But it is safe to say that such censorship is both unnecessary and potentially illegal, as it undeniably denies British citizens access to content which would feed an informed debate on some crucial issues.

 

 Updated: Severe punishment...

Ofcom fines Al Arabiya News channel from the UAE for broadcasting a prison confession extracted via torture


Link Here 17th February 2018

Al Arabiya News is an Arabic language news and current affairs channel licensed by Ofcom.

Mr Husain Abdulla complained to Ofcom on behalf of Mr Hassan Mashaima about unfair treatment and unwarranted infringement of privacy in connection with the obtaining of material included in the programme and the programme as broadcast on Al Arabiya News on 27 February 2016.

The programme reported on an attempt made in February and March 2011, by a number of people including the complainant, Mr Hassan Mashaima, to change the governing regime in Bahrain from a Kingdom to a Republic. It included an interview with Mr Mashaima, filmed while he was in prison awaiting a retrial, in which he explained the circumstances that had led to his arrest and conviction.

The interview included Mr Mashaima making confessions as to his participation in certain activities. Only approximately three months prior to the date on which Al Arabiya News said the footage was filmed, an official Bahraini Commission of Inquiry had found that similar such confessions had been obtained from individuals, including Mr Mashaima, under torture. During Mr Mashaima's subsequent retrial and appeal, he maintained that his conviction should be overturned, as confessions had been obtained from him under torture.

Ofcom's Decision is that the appropriate sanction should be a financial penalty of £120,000 and that the Licensee should be directed to broadcast a statement of Ofcom's findings, on a date to be determined by Ofcom, and that it should be directed to refrain from broadcasting the material found in breach again.

Update: Closed

6th February 2018. See article from ofcom.org.uk

Ofcom has announced that Al Arabiya News Channel is no longer licensed by Ofcom and hence cannot broadcast to the UK. Presumably this is related to the recent Ofcom fine.

Update: Maybe another reason for the UK closure

17th February 2018.  See  article from menafn.com

Al Arabiya News Channel has surrendered, with immediate effect, its licence with the UK broadcasting censor Ofcom, which had received a complaint over the channel's involvement in covering the hacking of Qatar News Agency (QNA), British law firm Carter-Ruck said.

QNA had hired Carter-Ruck to submit a complaint to Ofcom against Al Arabiya and Sky News Arabia for broadcasting fabricated and false statements attributed to Emir Sheikh Tamim bin Hamad Al-Thani after QNA's website was hacked on May 24, 2017. The four countries of Saudi Arabia, UAE, Bahrain and Egypt used this event to justify the siege that they have been imposing on Qatar since June 5, 2017.

The surrender of the licence by Al Arabiya, a Dubai-based satellite broadcaster owned by Saudi businessmen, was to avoid an Ofcom investigation.

QNA says Al Arabiya's decision was dictated by the inquiry but the channel says business reasons also influenced the move.

 

 Offsite Article: Forgive her Lord for she knows not what she does...


Link Here 17th February 2018
US Judge rules that embedding a tweet can be copyright infringement

See article from torrentfreak.com

 

  Calling for secretly funded press censorship...

Max Mosley launches legal action against several newspapers to delete coverage of his BDSM parties and his funding of the Impress press censor


Link Here 15th February 2018  full story: Max Mosley Privacy...Max Mosley, spanking and Nazi sex
The Daily Mail writes:

Max Mosley has launched a chilling new attack on Press freedom, with an extraordinary legal bid to scrub records of his notorious German-themed orgy from history.

The former Formula One boss also wants to restrict reporting on the £3.8million his family trust spends bankrolling the controversial Press regulator Impress.

He has taken legal action against a range of newspapers -- the Daily Mail, The Times, The Sun and at least one other national newspaper -- demanding they delete any references to his sadomasochistic sex party and never mention it again.

However, in a move that could have devastating consequences both for Press freedom and for historical records, Mr Mosley is now using data protection laws to try to force newspapers to erase any mention of it. He has also insisted that the newspapers stop making references to the fact he bankrolls Impress -- the highly controversial, state-approved Press regulator.

Yesterday, MPs warned against data protection laws being used to trample Press freedoms. Conservative MP Bill Cash said:

The freedom of the Press is paramount and it would be perverse to allow historical records to be removed on the basis of data protection. If data protection can be used to wipe out historical records, then the consequences would be dramatic.

John Whittingdale, a Tory former Culture Secretary, said:

Data protection is an important principle for the protection of citizens. However, it must not be used to restrict the freedom of the Press.

In his action, the multimillionaire racing tycoon claimed that the Daily Mail's owner, Associated Newspapers, had breached data protection principles in 34 articles published since 2013 -- including many opinion pieces defending the freedom of the Press. These principles are designed to stop companies from excessive processing of people's sensitive personal data or from holding on to people's details for longer than necessary, and come with exemptions for journalism that is in the public interest.

 

 Commented: Maybe more about asking why Google can't do the same...

The UK reveals a tool to detect uploads of jihadi videos


Link Here 15th February 2018  full story: Glorification of Censorship...Climate of fear caused by glorification of terrorsim

The UK government has unveiled a tool it says can accurately detect jihadist content and block it from being viewed.

Home Secretary Amber Rudd told the BBC she would not rule out forcing technology companies to use it by law. Rudd is visiting the US to meet tech companies to discuss the idea, as well as other efforts to tackle extremism.

The government provided £600,000 of public funds towards the creation of the tool by an artificial intelligence company based in London.

Thousands of hours of content posted by the Islamic State group was run past the tool, in order to train it to automatically spot extremist material.

ASI Data Science said the software can be configured to detect 94% of IS video uploads. Anything the software identifies as potential IS material would be flagged up for a human decision to be taken.

The company said it typically flagged 0.005% of non-IS video uploads. But this figure is meaningless without an indication of how many uploads contain any content connected with jihadis.
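The base-rate point can be made concrete with a short, hypothetical calculation (the upload volume and IS fraction below are illustrative assumptions, not figures from the report): even with the quoted 94% detection rate and 0.005% false-positive rate, if genuine IS material is a tiny fraction of uploads, most flagged videos will be innocent.

```python
# Hypothetical illustration of why a 0.005% false-positive rate is
# meaningless without knowing the base rate of IS uploads.
def flagged_breakdown(total_uploads, is_fraction, tpr=0.94, fpr=0.00005):
    """Return (true flags, false flags, precision) for a classifier with
    the quoted detection rate (tpr) and false-positive rate (fpr)."""
    is_uploads = total_uploads * is_fraction
    normal_uploads = total_uploads - is_uploads
    true_flags = is_uploads * tpr        # IS videos correctly flagged
    false_flags = normal_uploads * fpr   # innocent videos wrongly flagged
    precision = true_flags / (true_flags + false_flags)
    return true_flags, false_flags, precision

# Assume 1 million uploads a day, of which 1 in 100,000 is IS material.
tp, fp, prec = flagged_breakdown(1_000_000, 1 / 100_000)
print(f"true flags: {tp:.0f}, false flags: {fp:.0f}, precision: {prec:.0%}")
```

Under these assumed numbers, roughly five in six flagged videos would be false positives, which is why the false-positive rate alone tells us little.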

In London, reporters were given an off-the-record briefing detailing how ASI's software worked, but were asked not to share its precise methodology. However, in simple terms, it is an algorithm that draws on characteristics typical of IS and its online activity.

It sounds like the tool is more about analysing data about the uploading account (geographical origin, time of day, name of poster, etc.) than about analysing the video itself.

Comment: Even extremist takedowns require accountability

15th February 2018. See  article from openrightsgroup.org

Can extremist material be identified at 99.99% certainty as Amber Rudd claims today? And how does she intend to ensure that there is legal accountability for content removal?

The Government is very keen to ensure that extremist material is removed from private platforms, like Facebook, Twitter and Youtube. It has urged use of machine learning and algorithmic identification by the companies, and threatened fines for failing to remove content swiftly.

Today Amber Rudd claims to have developed a tool to identify extremist content, based on a database of known material. Such tools can have a role to play in identifying unwanted material, but we need to understand that there are some important caveats to what these tools are doing, with implications about how they are used, particularly around accountability. We list these below.

Before we proceed, we should also recognise that this is often about computers (bots) posting vast volumes of material with a very small audience. Amber Rudd's new machine may then potentially clean some of it up. It is in many ways a propaganda battle between extremists claiming to be internet savvy and exaggerating their impact, while our own government claims that they are going to clean up the internet. Both sides benefit from the apparent conflict.

The real world impact of all this activity may not be as great as is being claimed. We should be given much more information about what exactly is being posted and removed. For instance, the UK police remove over 100,000 pieces of extremist content by notice to companies: we currently get only this headline figure. We know nothing more about these takedowns. They might have never been viewed, except by the police, or they might have been very influential.

The result of the government's campaign to remove extremist material may be to push extremists towards more private or censor-proof platforms. That may impact the ability of the authorities to surveil criminals and to remove material in the future. We may regret chasing extremists off major platforms, where their activities are in full view and easily used to identify activity and actors.

Whatever the wisdom of proceeding down this path, we need to be worried about the unwanted consequences of machine takedowns. Firstly, we are pushing companies to be the judges of legal and illegal. Secondly, all systems make mistakes and require accountability for them; mistakes need to be minimised, but also rectified.

Here is our list of questions that need to be resolved.

1 What really is the accuracy of this system?

Small error rates translate into very large numbers of errors at scale. We see this with more general internet filters in the UK, where our blocked.org.uk project regularly uncovers and reports errors.
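To put numbers on how small error rates scale, here is a hypothetical sketch (the daily volume is an assumed figure for illustration, not a measurement from any platform): an error rate that sounds negligible still produces a large absolute number of wrong decisions every day.

```python
# Hypothetical: absolute number of wrong decisions for a filter with a
# small error rate applied at large-platform scale.
def wrongly_handled(items_per_day, error_rate, days=365):
    """Return (errors per day, errors per year) for a filter with the
    given error rate applied to items_per_day items."""
    per_day = items_per_day * error_rate
    return per_day, per_day * days

# Assume 400 million items a day (an illustrative figure) and the
# 0.005% error rate quoted for the government's tool.
per_day, per_year = wrongly_handled(400_000_000, 0.00005)
print(f"{per_day:,.0f} errors per day, {per_year:,.0f} per year")
```

Even under these assumptions, that is tens of thousands of wrong decisions a day, each one in principle needing human review or an appeal route.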

How are the accuracy rates determined? Is there any external review of its decisions?

The government appears to recognise the technology has limitations. In order to claim a high accuracy rate, they say at least 6% of extremist video content has to be missed. On large platforms that would be a great deal of material needing human review. The government's own tool shows the limitations of their prior demands that technology "solve" this problem.

Islamic extremists are operating rather like spammers when they post their material. Just like spammers, their techniques change to avoid filtering. The system will need constant updating to keep a given level of accuracy.

2 Machines are not determining meaning

Machines can only attempt to pattern match, with the assumption that content and form imply purpose and meaning. This explains how errors can occur, particularly in missing new material.

3 Context is everything

The same content can, in different circumstances, be legal or illegal. The law defines extremist material as promoting or glorifying terrorism. This is a vague concept. The same underlying material, with small changes, can become news, satire or commentary. Machines cannot easily determine the difference.

4 The learning is only as good as the underlying material

The underlying database is used to train machines to pattern match. Therefore the quality of the initial database is very important. It is unclear how the material in the database has been deemed illegal, but it is likely that these are police determinations rather than legal ones, meaning that inaccuracies or biases in police assumptions will be repeated in any machine learning.

5 Machines are making no legal judgment

The machines are not making a legal determination. This means a company's decision to act on what the machine says is absent of clear knowledge. At the very least, if material is "machine determined" to be illegal, the poster, and users who attempt to see the material, need to be told that a machine determination has been made.

6 Humans and courts need to be able to review complaints

Anyone who posts material must be able to get human review, and recourse to courts if necessary.

7 Whose decision is this exactly?

The government wants small companies to use the database to identify and remove material. If material is incorrectly removed, perhaps appealed, who is responsible for reviewing any mistake?

It may be too complicated for the small company. Since it is the database product making the mistake, the designers need to act to correct it so that it is less likely to be repeated elsewhere.

If the government want people to use their tool, there is a strong case that the government should review mistakes and ensure that there is an independent appeals process.

8 How do we know about errors?

Any takedown system tends towards overzealous takedowns. We hope the identification system is built for accuracy and prefers to miss material rather than remove the wrong things, however errors will often go unreported. There are strong incentives for legitimate posters of news, commentary, or satire to simply accept the removal of their content. To complain about a takedown would take serious nerve, given that you risk being flagged as a terrorist sympathiser, or perhaps having to enter formal legal proceedings.

We need a much stronger conversation about the accountability of these systems. So far, in every context, this is a question the government has ignored. If this is a fight for the rule of law and against tyranny, then we must not create arbitrary, unaccountable, extra-legal censorship systems.

 

 Extract: Flawed Social Media Law...

Human Rights Watch criticises the recent German internet censorship law that leaves social media companies with little choice but to take down any complained about posts without due consideration


Link Here 15th February 2018  full story: Internet Censorship in Germany...Germany considers state internet filtering

The new German law that compels social media companies to remove hate speech and other illegal content can lead to unaccountable, overbroad censorship and should be promptly reversed, Human Rights Watch said today. The law sets a dangerous precedent for other governments looking to restrict speech online by forcing companies to censor on the government's behalf. Wenzel Michalski, Germany director at Human Rights Watch said:

Governments and the public have valid concerns about the proliferation of illegal or abusive content online, but the new German law is fundamentally flawed. It is vague, overbroad, and turns private companies into overzealous censors to avoid steep fines, leaving users with no judicial oversight or right to appeal.

Parliament approved the Network Enforcement Act, commonly known as NetzDG, on June 30, 2017, and it took full effect on January 1, 2018. The law requires large social media platforms, such as Facebook, Instagram, Twitter, and YouTube, to promptly remove "illegal content," as defined in 22 provisions of the criminal code, ranging widely from insult of public office to actual threats of violence. Faced with fines of up to 50 million euros, companies are already removing content to comply with the law.

At least three countries -- Russia, Singapore, and the Philippines -- have directly cited the German law as a positive example as they contemplate or propose legislation to remove "illegal" content online. The Russian draft law, currently before the Duma, could apply to larger social media platforms as well as online messaging services.

Two key aspects of the law violate Germany's obligation to respect free speech, Human Rights Watch said. First, the law places the burden on companies that host third-party content to make difficult determinations of when user speech violates the law, under conditions that encourage suppression of arguably lawful speech. Even courts can find these determinations challenging, as they require a nuanced understanding of context, culture, and law. Faced with short review periods and the risk of steep fines, companies have little incentive to err on the side of free expression.

Second, the law fails to provide either judicial oversight or a judicial remedy should a cautious corporate decision violate a person's right to speak or access information. In this way, the largest platforms for online expression become "no accountability" zones, where government pressure to censor evades judicial scrutiny.

At the same time, social media companies operating in Germany and elsewhere have human rights responsibilities toward their users, and they should act to protect them from abuse by others, Human Rights Watch said. This includes stating in user agreements what content the company will prohibit, providing a mechanism to report objectionable content, investing adequate resources to conduct reviews with relevant regional and language expertise, and offering an appeals process for users who believe their content was improperly blocked or removed. Threats of violence, invasions of privacy, and severe harassment are often directed against women and minorities and can drive people off the internet or lead to physical attacks.

...Read the full article from hrw.org

 

  I am not a number! I am a surveillance database entry...

Thailand asks developers to speed up its 'Foreigner Database' that will record the entries and exits of all foreigners, and require them to report to local police every time they change hotel or address


Link Here 15th February 2018
Thailand's Immigration Bureau and the Interior Ministry have been instructed to speed up the implementation of a single-platform online database of foreigners entering and leaving the kingdom. The two agencies were told to have the new system fully functioning in six months.

The order was given by the Deputy leader of Thailand's military government, General Prawit Wongsuwan.

The single platform database would enable the government to keep tabs on all foreigners so that they can be easily located by the police.

As part of the new system, the Immigration Bureau will cancel the use of the Immigration 6 form and instead use e-passport data. A spokesman said each immigration checkpoint would be equipped with identity-checking equipment, such as fingerprint readers and passport scanners, to enter information into the database.

At the same time, the Interior Ministry's Provincial Administration Department must ensure that all hotels, apartments, guesthouses and other accommodation services keep and report records of foreigners using their services by informing the nearest immigration office or police station, which will in turn feed the data to the database. Foreigners also now have to report to the local police or immigration every time they change hotel or other accommodation whilst in Thailand.

 

 Offsite Article: We already give up our privacy to use phones, why not with cars too?...


Link Here 15th February 2018
The future of transport looks like a sensor-riddled computer

See article from theregister.co.uk

 

 Updated: Images blocked by default...

Tumblr changes the way that its safe mode works, possibly related to impending UK porn censorship


Link Here 13th February 2018
Tumblr is an image sharing website. It has just announced that it will change the way that its safe mode works. In an email to users, Tumblr writes:

Hi

Last year we introduced Safe Mode, which filters sensitive content in your dashboard and search results so you have control over what you see and what you don't. And now that it's been out for a while, we want to make sure everyone has the chance to try it out.

Over the next couple weeks, you might see some things in your dashboard getting filtered. If you like it that way, that's great. If you don't, no problem. You can go back by turning off Safe Mode any time.

tumblr.

Update: Are the safe mode changes related to impending UK porn censorship?

13th February 2018. See  article from dazeddigital.com

Tumblr has long been one of the freest spaces on the internet for porn and sex-positive content, thanks to lax guidelines compared to Facebook or Instagram. Porn creators, fetish community artists, and more were able to share work with little trouble. Tumblr made a major change last year with the introduction of a Safe Mode that initially filtered NSFW content if users chose to enable it. Now though, Tumblr is making Safe Mode the default setting for users.

The Safe Mode feature hides sensitive images -- for example nude images, even, as Tumblr's guidelines note, artistic or educational nudity like classic art or anatomy. As Motherboard reports, it's a function that claims to give users more control over what you see and what you don't, updating the Safe Search option that the platform introduced back in 2012 to remove sensitive material from the site's search results.

Rolling out the default setting means users will have to go out of their way to switch back and see unfiltered content. An email sent to Tumblr users last week states that they want to make sure everyone has the chance to try it out.

Many adult content creators are concerned this will affect their work and space on the platform. Tumblr user, freelance artist, and adult comic-maker Kayla-Na told Dazed of her frustrations: I understand wanting to make Tumblr a safer environment for younger audiences, but Tumblr has to remember that the adult community is still part of the website as a whole, and shouldn't be suppressed into oblivion.

See full article from dazeddigital.com

Perhaps the Tumblr safe mode has also been introduced as a step towards the UK's porn censorship by age verification. The next step may be for safe mode to be mandatorily imposed on internet viewers in Britain, and only turned off when they subject themselves to age verification.

 

  Last chapter...

Politician calls for the end of the Irish book censor because it does not censor enough (or anything at all)


Link Here 13th February 2018  full story: Book Censorship in Ireland...Minister for censorship investigated by his own censor board

Irish book censors have not banned a single magazine and have blocked just one book in the last ten years. Now a member of the Irish Parliament has called for the Censorship of Publications Board to be shut down.

Fianna Fail Arts and Culture Spokesperson Niamh Smyth said: This is one quango that should be whacked. She was referring to the political campaign slogan 'whack a quango', a call to shut down quangos. Smyth added:

The ongoing existence of a Censorship Board that doesn't censor anything is bringing the concept of censorship into disrepute at a time where we need it more than ever.

The only time the board has been heard of in ten years was the ludicrous submission of Alan Shatter's novel Laura over something to do with abortion.

 

  Sneaky tricks...

Instagram is trying to inform posters that their post has been saved, snapped or recorded before it self destructs


Link Here 13th February 2018  full story: Instagram Censorship...Photo sharing website gets heavy on the censorship
Some users have reported seeing pop-ups in Instagram (IG) informing them that, from now on, Instagram will flag when you record or take a screenshot of other people's IG stories, informing the originator that you have snapped or recorded the post.

According to a report by Tech Crunch , those who have been selected to participate in the IG trial can see exactly who has been creeping and snapping their stories. Those who have screenshotted an image or recorded a video will have a little camera shutter logo next to their usernames, much like Snapchat.

Of course, users have already found a nifty workaround to avoid social media stalking exposure. Here's the deal: turning your phone to airplane mode after you've loaded the story, and then taking your screenshot, means that users won't be notified of any impropriety (though it sounds easy for Instagram to fix this by saving the keypress until the next time the app communicates with the Instagram server). You could also download the stories from Instagram's website or use an app like Story Reposter. Maybe PC users just need another small window on the desktop, moving the mouse pointer to the small window before snapping the display.
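The server-side fix mooted above, holding the screenshot event on the client and delivering it on reconnect, can be sketched in a few lines. This is a hypothetical client model for illustration only, not Instagram's actual code; the class and method names are invented:

```python
import time

class StoryClient:
    """Hypothetical story viewer that queues screenshot events while offline."""

    def __init__(self):
        self.online = True
        self.pending = []    # events captured while offline
        self.delivered = []  # stands in for what the server (and originator) sees

    def take_screenshot(self, story_id):
        event = {"story": story_id, "at": time.time()}
        if self.online:
            self.delivered.append(event)  # originator is notified immediately
        else:
            self.pending.append(event)    # airplane mode: event is merely deferred

    def set_online(self, online):
        self.online = online
        if online and self.pending:
            # Flush deferred events on reconnect, defeating the airplane-mode trick.
            self.delivered.extend(self.pending)
            self.pending.clear()
```

Under a design like this, toggling airplane mode only delays the notification; the originator still learns of the screenshot the next time the app talks to the server.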

Clearly, there are concerns on Instagram's part about users' content being shared without their permission, but if a post is shared with someone for viewing, it is pretty tough to stop them from grabbing a copy as they view it.

 

  How Apple is Paving the Way to a Cloud Dictatorship in China...

So who'll trust the Chinese government with their cloud data?


Link Here 12th February 2018  full story: Mass snooping in China...Internet and phone snooping in China

The US-based global tech giant Apple Inc. is set to hand over the operation of its iCloud data center in mainland China to a local corporation called Guizhou-Cloud Big Data (GCBD) by February 28, 2018. When this transition happens, the local company will become responsible for handling the legal and financial relationship between Apple and China's iCloud users. After the transition takes place, the role of Apple will be restricted to an investment of one billion US dollars for the construction of a data center in Guiyang, and to providing technical support to the center, in the interest of preserving data security.

GCBD was established in November 2014 with registered capital of RMB 235 million yuan [approximately US$37.5 million]. It is a state enterprise solely owned by the Guizhou Big Data Development and Management Bureau. The company is also supervised by the Guizhou Board of Supervisors of State-owned Enterprises.

What will happen to Apple's Chinese customers once iCloud services are handed over to GCBD? In public statements, Apple has avoided acknowledging the political implications of the move:

This will allow us to continue to improve the speed and reliability of iCloud in China and comply with Chinese regulations.

Apple Inc. has not explained the real issue, which is that a state-owned big data company controlled by the Chinese government will have access to all the data of its iCloud service users in China. This will allow the capricious state apparatus to jump into the cloud and look into the data of Apple's Chinese users.


Over the next few weeks, iCloud users in China will receive a notification from Apple, seeking their endorsement of the new service terms. These "iCloud (operated by GCBD) terms and conditions" have a newly added paragraph, which reads:

If you understand and agree, Apple and GCBD have the right to access your data stored on its servers. This includes permission sharing, exchange, and disclosure of all user data (including content) according to the application of the law.

In other words, once the agreement is signed, GCBD -- a company solely owned by the state -- would get a key that can access all iCloud user data in China, legally.

Apple's double standard

Why would a company that built its reputation on data security surrender to the Chinese government so easily?

I still remember how in February 2016, after the attack in San Bernardino, Apple CEO Tim Cook withstood pressure from the US Department of Justice to build an iPhone operating system that could circumvent security features and install it in the iPhone of the shooter. Cook even issued an open letter to defend the company's decision.

Apple's insistence on protecting user data won broad public support. At the same time, it was criticized by the Department of Justice , which retorted that the open letter "appears to be based on its concern for its business model and public brand marketing strategy."

This comment has proven true today, because it is clear that the company is operating on a double standard in its Chinese business. We could even say that it is bullying the good actor while being terrified by the bad one.

Apple Inc. and Tim Cook, who had once stood firm against the US government, have suddenly become soft in front of the Chinese government. Faced with the unreasonable demands put forward by the Chinese authorities, Apple has not demonstrated a will to resist. On the contrary, it is giving people the impression that it will do whatever is needed to please the authorities.

Near the end of 2017, Apple Inc. admitted it had removed 674 VPN apps from the Chinese App Store. These apps are often used by netizens for circumventing the Great Firewall (the blocking of overseas websites and content). Skype also vanished from the Chinese App Store. Apple's submission to the Chinese authorities' requests generated a feeling of "betrayal" among Chinese users.

Some of my friends from mainland China have even decided to give up using Apple mobile phones and shifted to other mainland Chinese brands. Their decision, in addition to the price, is mainly in reaction to Apple's decision to take down VPN apps from the Chinese Apple store.

Some of these VPN apps can still be downloaded on mobile phones that use the Android system. This indicates that Apple is not "forced" to comply. People suspect that it is proactively performing an "obedient" role.

The handover of China iCloud to GCBD is unquestionably a performance of submission and kowtow. Online, several people have quipped: "the Chinese government is asking for 50 cents, Apple gives her a dollar."

Selling the iPhone in China

Apple says the handover is due to new regulations that cloud servers must be operated by a local corporation. But this is unconvincing. China's Cybersecurity Law, which was implemented on June 1 2017, does demand that user information and data collected in mainland China be stored within the border. But it does not require that the data center be operated by a local corporation.

In other words, even according to Article 37 of the Cybersecurity Law, Apple does not need to hand over the operation of iCloud services to a local corporation, to say nothing of one solely owned by the state. Though Apple may have to follow the "Chinese logic" or "unspoken rule", the decision looks more like a strategic act, intended to insulate Apple from financial, legal and moral responsibility to its Chinese users, as stated in the new customer terms and conditions on the handover of operation. It only wants to continue making a profit by selling the iPhone in China.

Many people have encountered similar difficulties when doing business in China -- they have to follow the authorities' demands. Some even think that it is inevitable and therefore reasonable. For example, Baidu's CEO Robin Li said in a recent interview with Time Magazine, "That's our way of doing business here".

I can see where Apple is coming from. China is now the third largest market for the iPhone. While confronting vicious competition from local brands, the future growth of iPhone in China has been threatened . And unlike in the US, if Apple does not submit to China and comply with the Cybersecurity Law, the Chinese authorities can use other regulations and laws like the Encryption Law of the People's Republic of China (drafting) and Measures for Security Assessment of Cross-border Data Transfer (drafting) to force Apple to yield.

However, as the world's biggest corporation by market value, with so many loyal fans, Apple's performance in China is still disappointing. It has not even tried to resist. On the contrary, it has proactively assisted [Chinese authorities] in selling out its users' private data.

Assisting in the making of a 'Cloud Dictatorship'

This is perhaps the best result that China's party-state apparatus could hope for. In recent years, China has come to see big data as a strategic resource for its diplomacy and for maintaining domestic stability. Big data is as important as military strength and ideological control. There is even a new political term "Data-in-Party-control" coming into use.

As an Apple fan, I lament the fact that Apple has become a key multinational corporation offering its support to the Chinese Communist Party's engineering of a "Cloud Dictatorship". It serves as a very bad role model: now that Apple has kowtowed to the CCP, how long will other tech companies like Facebook, Google and Amazon be able to resist the pressure?

 

 Offsite Article: Fake news has a long history....


Link Here 12th February 2018  full story: Fake News...Declining respect for the authorities is blamed on 'fake' news
Beware the state being keeper of 'the truth'. By Kenan Malik

See article from theguardian.com

 

 Offsite Article: Why should I use DuckDuckGo instead of Google?...


Link Here 12th February 2018
Promotional material, but it nevertheless makes a few good points

See article from quora.com

 

  Fake blame...

Matt Hancock rules out creating a UK social media censor


Link Here 10th February 2018
The UK's digital and culture secretary, Matt Hancock, has ruled out creating a new internet censor targeting social media such as Facebook and Twitter.

In an interview on the BBC's Media Show , Hancock said he was not inclined in that direction and instead wanted to ensure existing regulation is fit for purpose. He said:

If you tried to bring in a new regulator you'd end up having to regulate everything. But that doesn't mean that we don't need to make sure that the regulations ensure that markets work properly and people are protected.

Meanwhile the Electoral Commission and the Department for Digital, Culture, Media and Sport select committee are now investigating whether Russian groups used the platforms to interfere in the Brexit referendum in 2016. The DCMS select committee is in the US this week to grill tech executives about their role in spreading fake news. In a committee hearing in Washington yesterday, YouTube's policy chief said the site had found no evidence of Russian-linked accounts purchasing ads to interfere in the Brexit referendum.

 

 Offsite Article: Gagging orders: The internet surveillance nobody can talk about...


Link Here 10th February 2018  full story: Snooper's Charter Plus...2015 Cameron government expands the Snooper's Charter
The Investigatory Powers Act has heralded a new era of secret state surveillance

See article from alphr.com

 

 Updated: Lost Direction...

Australian censors ban Japanese console game Omega Labyrinth Z


Link Here 9th February 2018  full story: Banned Games in Australia...Adult games ban
The Australian censor has banned a Japanese console game titled Omega Labyrinth Z.

The game was released in Japan last year and doesn't seem to have stirred any controversy. It is set for a European release on 30th June 2018.

The Australian censor has not yet published any meaningful reason for the ban but it is probably related to the depiction of young characters mixed with sexy themes.

Update: Simulated stimulation

9th February 2018. See  article from kotaku.com.au

kotaku.com.au has managed to get hold of the Australian censor's reasoning behind its ban of  Omega Labyrinth Z . The censors write:

The game features a variety of female characters with their cleavages emphasised by their overtly provocative clothing, which often reveal the sides or underside of their breasts and obscured genital region. Multiple female characters are also depicted fully nude, with genitals obscured by objects and streams of light throughout the game. Although of indeterminate age, most of these characters are adult-like, with voluptuous bosoms and large cleavages that are flaunted with a variety of skimpy outfits.

One character, Urara Rurikawa, is clearly depicted as child-like in comparison with the other female characters. She is flat-chested, physically underdeveloped (particularly visible in her hip region) and is significantly shorter than other characters in the game. She also has a child-like voice, wears a school uniform-esque outfit and appears naive in her outlook on life.

At one point in the game, Urara Rurikawa and a friend are referred to as "the younger girls" by one of the game's main characters. In the Board's opinion, the character of Urara Rurikawa is a depiction of a person who is, or appears to be, a child under 18 years.

In some gameplay modes, including the "awakening" mode, the player is able to touch the breasts, buttocks, mouths and genital regions of each character, including Urara Rurikawa, while they are in sexualised poses, receiving positive verbal feedback for interactions which are implied to be pleasurable for the characters and negative verbal feedback, including lines of dialogue such as "I-It doesn't feel good..." and "Hyah? Don't touch there!," for interactions which are implied to be unpleasurable, implying a potential lack of consent.

The aim of these sections is, implicitly, to sexually arouse these characters to the point that a "shame break" is activated, in which some of the characters' clothing is removed - with genital regions obscured by light and various objects - and the background changes colour as they implicitly orgasm.

In one "awakening" mode scenario, the player interacts with Urara Rurikawa, who is depicted lying down, clutching a teddy bear, with lines of dialogue such as "I'm turning sleepy...", "I'm so sleepy now..." and "I might wake up..." implying that she is drifting in and out of sleep.

The player interacts with this child-like character in the same manner as they interact with adult characters, clicking her breasts, buttocks, mouth and genital regions until the "shame break" mode is activated. During this section of the game, with mis-clicks, dialogue can be triggered, in which Urara Rurikawa says, "Stop tickling...", "Stop poking..." and "Th-that feels strange...", implying a lack of consent.

...

In the Board's opinion, the ability to interact with the character Urara Rurikawa in the manner described above constituted a simulation of sexual stimulation of a child.

 

  The CLOUD Act...

A Dangerous Expansion of US Police Snooping on Cross-Border Data


Link Here 9th February 2018  full story: Internet Snooping in the US...Snooping continues after Snowden revelations

This week, Senators Hatch, Graham, Coons, and Whitehouse introduced a bill that diminishes the data privacy of people around the world.

The Clarifying Overseas Use of Data ( CLOUD ) Act expands American and foreign law enforcement's ability to target and access people's data across international borders in two ways. First, the bill creates an explicit provision for U.S. law enforcement (from a local police department to federal agents in Immigration and Customs Enforcement) to access "the contents of a wire or electronic communication and any record or other information" about a person regardless of where they live or where that information is located on the globe. In other words, U.S. police could compel a service provider--like Google, Facebook, or Snapchat--to hand over a user's content and metadata, even if it is stored in a foreign country, without following that foreign country's privacy laws.

Second, the bill would allow the President to enter into "executive agreements" with foreign governments that would allow each government to acquire users' data stored in the other country, without following each other's privacy laws.

For example, because U.S.-based companies host and carry much of the world's Internet traffic, a foreign country that enters one of these executive agreements with the U.S. could potentially wiretap people located anywhere on the globe (so long as the target of the wiretap is not a U.S. person or located in the United States) without the procedural safeguards of U.S. law typically given to data stored in the United States, such as a warrant, or even notice to the U.S. government. This is an enormous erosion of current data privacy laws.

This bill would also moot legal proceedings now before the U.S. Supreme Court. In the spring, the Court will decide whether or not current U.S. data privacy laws allow U.S. law enforcement to serve warrants for information stored outside the United States. The case, United States v. Microsoft (often called "Microsoft Ireland"), also calls into question principles of international law, such as respect for other countries' territorial boundaries and their rule of law.

Notably, this bill would expand law enforcement access to private email and other online content, yet the Email Privacy Act, which would create a warrant-for-content requirement, has still not passed the Senate, even though it has enjoyed unanimous support in the House for the past two years.

The CLOUD Act and the US-UK Agreement

The CLOUD Act's proposed language is not new. In 2016, the Department of Justice first proposed legislation that would enable the executive branch to enter into bilateral agreements with foreign governments to allow those foreign governments direct access to U.S. companies and U.S. stored data. Ellen Nakashima at the Washington Post broke the story that these agreements (the first iteration has already been negotiated with the United Kingdom) would enable foreign governments to wiretap any communication in the United States, so long as the target is not a U.S. person. In 2017 , the Justice Department re-submitted the bill for Congressional review, but added a few changes: this time including broad language to allow the extraterritorial application of U.S. warrants outside the boundaries of the United States.

In September 2017, EFF, with a coalition of 20 other privacy advocates, sent a letter to Congress opposing the Justice Department's revamped bill.

The executive agreement language in the CLOUD Act is nearly identical to the language in the DOJ's 2017 bill. None of EFF's concerns have been addressed. The legislation still:

  • Includes a weak standard for review that does not rise to the protections of the warrant requirement under the 4th Amendment.

  • Fails to require foreign law enforcement to seek individualized and prior judicial review.

  • Grants real-time access and interception to foreign law enforcement without requiring the heightened warrant standards that U.S. police have to adhere to under the Wiretap Act.

  • Fails to place adequate limits on the category and severity of crimes for this type of agreement.

  • Fails to require notice on any level -- to the person targeted, to the country where the person resides, and to the country where the data is stored. (Under a separate provision regarding U.S. law enforcement extraterritorial orders, the bill allows companies to give notice to the foreign countries where data is stored, but there is no parallel provision for company-to-country notice when foreign police seek data stored in the United States.)

The CLOUD Act also creates an unfair two-tier system. Foreign nations operating under executive agreements are subject to minimization and sharing rules when handling data belonging to U.S. citizens, lawful permanent residents, and corporations. But these privacy rules do not extend to someone born in another country and living in the United States on a temporary visa or without documentation. This denial of privacy rights is unlike other U.S. privacy laws. For instance, the Stored Communications Act protects all members of the "public" from the unlawful disclosure of their personal communications.

An Expansion of U.S. Law Enforcement Capabilities

The CLOUD Act would give unlimited jurisdiction to U.S. law enforcement over any data controlled by a service provider, regardless of where the data is stored and who created it. This applies to content, metadata, and subscriber information -- meaning private messages and account details could be up for grabs. The breadth of such unilateral extraterritorial access creates a dangerous precedent for other countries who may want to access information stored outside their own borders, including data stored in the United States.

EFF argued on this basis (among others) against unilateral U.S. law enforcement access to cross-border data, in our Supreme Court amicus brief in the Microsoft Ireland case.

When data crosses international borders, U.S. technology companies can find themselves caught in the middle between the conflicting data laws of different nations: one nation might use its criminal investigation laws to demand data located beyond its borders, yet that same disclosure might violate the data privacy laws of the nation that hosts that data. Thus, U.S. technology companies lobbied for and received provisions in the CLOUD Act allowing them to move to quash or modify U.S. law enforcement orders for extraterritorial data. The tech companies can quash a U.S. order when the order does not target a U.S. person and might conflict with a foreign government's laws. To do so, the company must object within 14 days, and undergo a complex "comity" analysis -- a procedure where a U.S. court must balance the competing interests of the U.S. and foreign governments.

Failure to Support Mutual Assistance

Of course, there is another way to protect technology companies from this dilemma, which would also protect the privacy of technology users around the world: strengthen the existing international system of Mutual Legal Assistance Treaties (MLATs). This system allows police who need data stored abroad to obtain the data through the assistance of the nation that hosts the data. The MLAT system encourages international cooperation.

It also advances data privacy. When foreign police seek data stored in the U.S., the MLAT system requires them to adhere to the Fourth Amendment's warrant requirements. And when U.S. police seek data stored abroad, it requires them to follow the data privacy rules where the data is stored, which may include important " necessary and proportionate " standards. Technology users are most protected when police, in the pursuit of cross-border data, must satisfy the privacy standards of both countries.

While there are concerns from law enforcement that the MLAT system has become too slow, those concerns should be addressed with improved resources, training, and streamlining.

The CLOUD Act raises dire implications for the international community, especially as the Council of Europe is beginning a process to review the MLAT system that has been supported for the last two decades by the Budapest Convention. Although Senator Hatch has in the past introduced legislation that would support the MLAT system, this new legislation fails to include any provisions that would increase resources for the U.S. Department of Justice to tackle its backlog of MLAT requests, or otherwise improve the MLAT system.

A growing chorus of privacy groups in the United States opposes the CLOUD Act's broad expansion of U.S. and foreign law enforcement's unilateral powers over cross-border data. For example, Sharon Bradford Franklin of OTI (and the former executive director of the U.S. Privacy and Civil Liberties Oversight Board) objects that the CLOUD Act will move law enforcement access capabilities "in the wrong direction, by sacrificing digital rights." CDT and Access Now also oppose the bill.

Sadly, some major U.S. technology companies and legal scholars support the legislation. But, to set the record straight, the CLOUD Act is not a " good start ." Nor does it do a " remarkable job of balancing these interests in ways that promise long-term gains in both privacy and security." Rather, the legislation reduces protections for the personal privacy of technology users in an attempt to mollify tensions between law enforcement and U.S. technology companies.

Legislation to protect the privacy of technology users from government snooping has long been overdue in the United States. But the CLOUD Act does the opposite, and privileges law enforcement at the expense of people's privacy. EFF strongly opposes the bill. Now is the time to strengthen the MLAT system, not undermine it.

 

 Offsite Article: The real consequences of fake porn and news...


Link Here 9th February 2018
Fake celebrity porn has been a bit of fun, but what about the wider issue of easily faked videos? Perhaps 'evidence' supporting #MeToo accusations, or a bit of fun with Donald Trump in Moscow.

See article from techcrunch.com

 

  Progressive language...

US basic cable TV channels get more adventurous about strong language


Link Here 8th February 2018
US network TV is very strict about strong language and the basic cable channels have generally followed suit. However some of the late night programming on basic cable has started to care less and less about tiptoeing around language.

In fact, SyFy and USA, both networks owned by NBC Universal, are now throwing caution to the wind and have been letting fly with 'fuck' since earlier this year.

Previously, swearing on SyFy and USA stuck to the guidelines laid out by the Federal Communications Commission, but as a basic cable channel, their Standards and Practices division was not actually beholden to follow those rules strictly. In fact the only thing holding back basic cable networks from using what is considered to be more vulgar language is their advertisers who traditionally don't like it.

To keep things clean, they usually dip the audio of either the f or the k whenever fuck is said in an episode. But according to Buzzfeed, USA and SyFy have worked that all out with their advertisers: when language, 'fuck' specifically, is deemed important to the style or plot of a show, SyFy and USA now allow it. Such language results in a TV-MA rating so audiences know it's intended for mature audiences only.

However, basic cable channels have started to push the envelope. The word shit has been thrown around a lot more on networks like FX, AMC, and Comedy Central. The latter was even the first to bring uncensored usage of fuck to basic cable by creating their late night programming block called The Secret Stash, which began with the airing of the R-rated film adaptation South Park: Bigger, Longer & Uncut. They don't have that block anymore, but their late night programming still airs the uncensored versions of movies and stand-up specials.

Fans of The Magicians on SyFy might have already noticed this change. Ever since the third season premiered on SyFy back in January, they've been dropping f-bombs uncensored.

No doubt the US moralist campaigners will be reaching for their megaphones.

 

 Extract: Hate speech thrives underground...

The EU is failing to engage with platforms where the most hateful and egregious terrorist content lives.


Link Here 8th February 2018  full story: Internet Censorship in EU...EU proposes mandatory cleanfeed for all member states

Illegal content and terrorist propaganda are still spreading rapidly online in the European Union -- just not on mainstream platforms, new analysis shows.

Twitter, Google and Facebook all play by EU rules when it comes to illegal content, namely hate speech and terrorist propaganda, policing their sites voluntarily.

But with increased scrutiny on mainstream sites, alt-right and terrorist sympathizers are flocking to niche platforms where illegal content is shared freely, security experts and anti-extremism activists say.

See  article from politico.eu

 

  Smartphone data tracking is more than creepy...

Here's why you should be worried


Link Here 8th February 2018

Smartphones rule our lives. Having information at our fingertips is the height of convenience. They tell us all sorts of things, but the information we see and receive on our smartphones is just a fraction of the data they generate. By tracking and monitoring our behaviour and activities, smartphones build a digital profile of shockingly intimate information about our personal lives.

These records aren’t just a log of our activities. The digital profiles they create are traded between companies and used to make inferences and decisions that affect the opportunities open to us and our lives. What’s more, this typically happens without our knowledge, consent or control.

New and sophisticated methods built into smartphones make it easy to track and monitor our behaviour. A vast amount of information can be collected from our smartphones, both when being actively used and while running in the background. This information can include our location, internet search history, communications, social media activity, finances and biometric data such as fingerprints or facial features. It can also include metadata – information about the data – such as the time and recipient of a text message.

Your emails can reveal your social network. Image: David Glance

Each type of data can reveal something about our interests and preferences, views, hobbies and social interactions. For example, a study conducted by MIT demonstrated how email metadata can be used to map our lives , showing the changing dynamics of our professional and personal networks. This data can be used to infer personal information including a person’s background, religion or beliefs, political views, sexual orientation and gender identity, social connections, or health. For example, it is possible to deduce our specific health conditions simply by connecting the dots between a series of phone calls.
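The kind of inference described here needs no message content at all. A toy sketch in the spirit of the MIT analysis, using only (sender, recipient, timestamp) triples, is enough to rank a person's strongest ties. The data and function below are illustrative assumptions, not the study's actual method:

```python
from collections import Counter

def contact_graph(metadata):
    """Build an undirected contact graph weighted by message count.

    Only metadata is used: who wrote to whom, and when. The content
    of the messages is never touched.
    """
    edges = Counter()
    for sender, recipient, _timestamp in metadata:
        pair = tuple(sorted((sender, recipient)))  # undirected edge
        edges[pair] += 1
    return edges

# Toy metadata: (sender, recipient, unix timestamp)
emails = [
    ("me", "alice", 1_500_000_000),
    ("alice", "me", 1_500_003_600),
    ("me", "bob", 1_500_007_200),
    ("me", "alice", 1_500_010_800),
]

graph = contact_graph(emails)
print(graph.most_common(1))  # strongest tie: [(('alice', 'me'), 3)]
```

Run over months of real metadata and bucketed by week, the same counts reveal when relationships form, intensify and fade, which is exactly the "changing dynamics" the study describes.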

Different types of data can be consolidated and linked to build a comprehensive profile of us. Companies that buy and sell data – data brokers – already do this. They collect and combine billions of data elements about people to make inferences about them. These inferences may seem innocuous but can reveal sensitive information such as ethnicity, income levels, educational attainment, marital status, and family composition.

A recent study found that seven in ten smartphone apps share data with third-party tracking companies like Google Analytics. Data from numerous apps can be linked within a smartphone to build this more detailed picture of us, even if permissions for individual apps are granted separately. Effectively, smartphones can be converted into surveillance devices.

The result is the creation and amalgamation of digital footprints that provide in-depth knowledge about your life. The most obvious reason for companies collecting information about individuals is for profit, to deliver targeted advertising and personalised services. Some targeted ads, while perhaps creepy, aren’t necessarily a problem, such as an ad for the new trainers you have been eyeing up.

Payday loan ads. Image: Upturn, CC BY

But targeted advertising based on our smartphone data can have real impacts on livelihoods and well-being, beyond influencing purchasing habits. For example, people in financial difficulty might be targeted for ads for payday loans . They might use these loans to pay for unexpected expenses , such as medical bills, car maintenance or court fees, but could also rely on them for recurring living costs such as rent and utility bills. People in financially vulnerable situations can then become trapped in spiralling debt as they struggle to repay loans due to the high cost of credit.

Targeted advertising can also enable companies to discriminate against people and deny them an equal chance of accessing basic human rights, such as housing and employment. Race is not explicitly included in Facebook's basic profile information, but a user's "ethnic affinity" can be worked out based on pages they have liked or engaged with. Investigative journalists from ProPublica found that it is possible to exclude those who match certain ethnic affinities from housing ads, and certain age groups from job ads.

This is different to traditional advertising in print and broadcast media, which although targeted is not exclusive. Anyone can still buy a copy of a newspaper, even if they are not the typical reader. Targeted online advertising can completely exclude some people from information without them ever knowing. This is a particular problem because the internet, and social media especially, is now such a common source of information.

Social media data can also be used to calculate creditworthiness, despite its dubious relevance. Indicators such as the level of sophistication in a user's language on social media, and their friends' loan repayment histories, can now be used for credit checks. This can have a direct impact on the fees and interest rates charged on loans, the ability to buy a house, and even employment prospects.

There’s a similar risk with payment and shopping apps. In China, the government has announced plans to combine data about personal expenditure with official records, such as tax returns and driving offences. This initiative, which is being led by both the government and companies, is currently in the pilot stage . When fully operational, it will produce a social credit score that rates an individual citizen’s trustworthiness. These ratings can then be used to issue rewards or penalties, such as privileges in loan applications or limits on career progression.

These possibilities are not distant or hypothetical – they exist now. Smartphones are effectively surveillance devices, and everyone who uses them is exposed to these risks. What's more, it is impossible to anticipate and detect the full range of ways smartphone data is collected and used, and to demonstrate the full scale of its impact. What we know could be just the beginning.

Vivian Ng, Senior Research Officer, Human Rights Centre, University of Essex, and Catherine Kent, Project Officer, Human Rights Centre, University of Essex

This article was originally published on The Conversation. Read the original article.

 

 Offsite Article: Facebook moderator: I had to be prepared to see anything...


Link Here 8th February 2018  full story: Facebook Censorship...Facebook quick to censor
It's mostly pornography, says Sarah Katz, recalling her eight-month stint working as a Facebook moderator

See article from bbc.com

 

  Unacceptable...

Government outlines next steps to make the UK the most censored place to be online


Link Here 7th February 2018

Government outlines next steps to make the UK the safest place to be online

The Prime Minister has announced plans to review laws and make sure that what is illegal offline is illegal online as the Government marks Safer Internet Day.

The Law Commission will launch a review of current legislation on offensive online communications to ensure that laws are up to date with technology.

As set out in the Internet Safety Strategy Green Paper, the Government is clear that abusive and threatening behaviour online is totally unacceptable. This work will determine whether laws are effective enough in ensuring parity between the treatment of offensive behaviour that happens offline and online.

The Prime Minister has also announced:

  • That the Government will introduce a comprehensive new social media code of practice this year, setting out clearly the minimum expectations on social media companies

  • The introduction of an annual internet safety transparency report - providing UK data on offensive online content and what action is being taken to remove it.

Other announcements made today by Secretary of State for Digital, Culture, Media and Sport (DCMS) Matt Hancock include:

  • A new online safety guide for those working with children, including school leaders and teachers, to prepare young people for digital life

  • A commitment from major online platforms including Google, Facebook and Twitter to put in place specific support during election campaigns to ensure abusive content can be dealt with quickly -- and that they will provide advice and guidance to Parliamentary candidates on how to remain safe and secure online

DCMS Secretary of State Matt Hancock said:

We want to make the UK the safest place in the world to be online and having listened to the views of parents, communities and industry, we are delivering on the ambitions set out in our Internet Safety Strategy.

Not only are we seeing if the law needs updating to better tackle online harms, we are moving forward with our plans for online platforms to have tailored protections in place - giving the UK public standards of internet safety unparalleled anywhere else in the world.

Law Commissioner Professor David Ormerod QC said:

There are laws in place to stop abuse but we've moved on from the age of green ink and poison pens. The digital world throws up new questions and we need to make sure that the law is robust and flexible enough to answer them.

If we are to be safe both on and off line, the criminal law must offer appropriate protection in both spaces. By studying the law and identifying any problems we can give government the full picture as it works to make the UK the safest place to be online.

The latest announcements follow the publication of the Government's Internet Safety Strategy Green Paper last year which outlined plans for a social media code of practice. The aim is to prevent abusive behaviour online, introduce more effective reporting mechanisms to tackle bullying or harmful content, and give better guidance for users to identify and report illegal content. The Government will be outlining further steps on the strategy, including more detail on the code of practice and transparency reports, in the spring.

To support this work, people working with children, including teachers and school leaders, will be given a new guide for online safety, to help educate young people in safe internet use. Developed by the UK Council for Child Internet Safety (UKCCIS), the toolkit describes the knowledge and skills for staying safe online that children and young people should have at different stages of their lives.

Major online platforms including Google, Facebook and Twitter have also agreed to take forward a recommendation from the Committee on Standards in Public Life (CSPL) to provide specific support for Parliamentary candidates so that they can remain safe and secure on these sites during election campaigns. These are important steps in safeguarding the free and open elections which are a key part of our democracy.

Notes

Included in the Law Commission's scope for their review will be the Malicious Communications Act and the Communications Act. It will consider whether difficult concepts need to be reconsidered in the light of technological change - for example, whether the definition of who a 'sender' is needs to be updated.

The Government will bring forward an Annual Internet Safety Transparency report, as proposed in our Internet Safety Strategy green paper. The reporting will show:

  • the amount of harmful content reported to companies

  • the volume and proportion of this material that is taken down

  • how social media companies are handling and responding to complaints

  • how each online platform moderates harmful and abusive behaviour and the policies they have in place to tackle it.

Annual reporting will help to set baselines against which to benchmark companies' progress, and encourage the sharing of best practice between companies.

The new social media code of practice will outline standards and norms expected from online platforms. It will cover:

  • The development, enforcement and review of robust community guidelines for the content uploaded by users and their conduct online

  • The prevention of abusive behaviour online and the misuse of social media platforms -- including action to identify and stop users who are persistently abusing services

  • The reporting mechanisms that companies have in place for inappropriate, bullying and harmful content, and ensuring they have clear policies and performance metrics for taking this content down

  • The guidance social media companies offer to help users identify illegal content and contact online, and advise them on how to report it to the authorities, to ensure this is as clear as possible

  • The policies and practices companies apply around privacy issues.

Comment: Preventing protest

7th February 2018. See  article from indexoncensorship.org

The UK Prime Minister's proposals for possible new laws to stop intimidation against politicians have the potential to prevent legal protests and free speech that are at the core of our democracy, says Index on Censorship. One hundred years after the suffragette demonstrations won the right for women to have the vote for the first time, a law that potentially silences angry voices calling for change would be a retrograde step.

No one should be threatened with violence, or subjected to violence, for doing their job, said Index chief executive Jodie Ginsberg. However, the UK already has a host of laws dealing with harassment of individuals both off and online that cover the kind of abuse politicians receive on social media and elsewhere. A loosely defined offence of 'intimidation' could cover a raft of perfectly legitimate criticism of political candidates and politicians -- including public protest.

 

  Banning lucky bags...

Germany and Sweden to consider banning loot boxes from video games played by children


Link Here 7th February 2018  full story: Loot boxes in video games...Worldwide action against monetisation of video games
Germany is looking into imposing restrictions on loot boxes in videogames, according to Welt. A study by the University of Hamburg has found that elements of gambling are becoming increasingly common in videogames. It's an important part of the game industry's business model, but the chairman of the Youth Protection Commission of the State Media Authorities (KJM) warned that it may violate laws against promoting gambling to children and adolescents.

The Youth Protection Commission will render its decision on loot boxes in March.

Update: Sweden too

9th February 2018.  See  article from neoseeker.com

Ardalan Shekarabi, the nation's minister of civil affairs, is concerned with making sure Swedish consumer protection laws apply across the board when it comes to gaming. Shekarabi admits that loot boxes are like gambling, but has asked Swedish authorities to consider whether that is how they should actually be classified. The idea is to have legislation ready by January of next year to ensure Swedish gamers don't have to worry about a transaction falling outside of the nation's consumer protection laws in the event something goes south.

 

  A new YouTube badge will mark state propaganda...

YouTube announces that it will indicate when news videos are from state funded sources


Link Here 6th February 2018  full story: Fake News...Declining respect for the authorities is blamed on 'fake' news

Greater transparency for users around news broadcasters

Today we will start rolling out notices below videos uploaded by news broadcasters that receive some level of government or public funding.

Our goal is to equip users with additional information to help them better understand the sources of news content that they choose to watch on YouTube.

We're rolling out this feature to viewers in the U.S. for now, and we don't expect it to be perfect. Users and publishers can give us feedback through the send feedback form. We plan to improve and expand the feature over time.

The notice will appear below the video, but above the video's title, and include a link to Wikipedia so viewers can learn more about the news broadcaster.

 

  Police harassment...

As always, the police happily wade in to act as bullies on behalf of a complainant, this time a council that does not like to be criticised on Facebook


Link Here 6th February 2018
A former councillor who used social media to criticise local government spending was visited at home by police officers.

Tony Boxford was stunned to see uniformed officers outside his house and accused Suffolk Police of wasting valuable time and resources.

It's ridiculous, he said. They don't have the resources to deal with traffic issues or parking problems yet they have time to come and knock on people's doors on behalf of the council.

A second man received a similar visit from the police after making critical comments about the town council's clerk on social media and at a private Christmas party.

Boxford had made fairly benign remarks in a blog post questioning whether Hadleigh Town Council was acting in constituents' best interests and criticising the actions of its clerk. On Facebook, he had criticised the town's former mayor and attacked the council for allegedly spending taxpayers' money on maintaining the guildhall rather than for the benefit of local people.

Boxford said police could not tell him specifically what he had said or written to warrant the visit.

A spokesman for Suffolk Police said:

Concerns were raised with police that some comments had been made regarding a member of the town council which they believed to be derogatory in nature - this included posts on social media.

Two individuals have subsequently been spoken to by officers and offered words of advice regarding these comments and in particular the appropriate use of social media.

The police said nothing about admonishing the complainant for wasting police time, nor about its own actions undermining respect for the law.

 

 Offsite Article: Russia's Internet Censorship on the rise...


Link Here 6th February 2018  full story: Internet Censorship in Russia...Russia restoring repressive state control of media
A new report has revealed the escalating internet censorship situation in Russia, with a record number of cases being reported in the past year.

See article from vpncompare.co.uk

 

 Offsite Article: When censorship is left to commercial interests that are not interested...


Link Here 5th February 2018
Private Censorship Is Not the Best Way to Fight Hate or Defend Democracy: Here Are Some Better Ideas. By the EFF

See article from eff.org

 

  Taking a bow...

Royal Court theatre in London pulls drama about Tibet for fear of offending China


Link Here 4th February 2018  full story: China International Censors...China pressures other countries into censorship
A stage drama about Tibet has been pulled by the Royal Court Theatre for fear of offending China.

Abhishek Majumdar said his play Pah-la was shelved because of fears over an arts programme in Beijing. The play deals with life in contemporary Tibet and draws on the personal stories of Tibetans he worked with in India.

The London theatre, once known for its groundbreaking international productions, is facing questions after Abhishek Majumdar revealed a copy of the poster for the play Pah-la, bearing the imprints of the Arts Council and the Royal Court along with text suggesting that it was due to run for a month last autumn.

Majumdar claimed the play was withdrawn because of fears over the possible impact on an arts programme in Beijing, where Chinese writers are working with the publicly funded theatre and British Council.

The play was in development for three years and rehearsals had been fixed, according to Majumdar, who claimed that the British Council had pressurised the theatre to withdraw it because of sensitivities relating to the writing programme.

The Royal Court said it had had to postpone and then withdraw Pah-la for financial reasons last year, after it had been in development for three years, and that it was now committed to producing the play in spring 2019 in the light of recent events. It added:

The Royal Court always seeks to protect and not to silence any voice. [...BUT...] In an international context, this can sometimes be more complex across communities. The Royal Court is committed to protecting free speech, sometimes within difficult situations.

 

Censor Watch
censorwatch.co.uk