World-first online safety hub tackles cyberbullying and harmful online content

7 News Report

A world-first one-stop shop has been launched for Australian parents worried about the content their children might be confronted with online.

The new portal provides up-to-date resources and reporting tools to keep the whole family informed.

You can find the National Online Safety Hub here.

https://www.facebook.com/7NEWSsydney/videos/439965456872937/

Experts warn not to dismiss Joker-related threats as fears emerge film could inspire anti-women extremism

Annabel Hennessy – The West Australian

Violent online threats being inspired by the new Joker movie should not be dismissed as trolling, according to online experts and anti-abuse campaigners who are worried the blockbuster could fuel anti-women extremists.

It comes as police in NSW are running patrols in Randwick, in Sydney’s east, after a threat was posted on the notorious internet forum 4Chan appearing to warn of a potential attack at a screening of the film at a popular cinema in the suburb.

In the US, security around cinemas has also been beefed up after fears the film’s graphic portrayal of a social outcast who commits violent crimes after being sexually rejected could inspire copy-cat attacks.

The movie, starring Joaquin Phoenix and directed by Todd Phillips, has been likened to Martin Scorsese’s Taxi Driver starring Robert De Niro.

Cyber safety expert Ross Bark said he was worried about parents allowing children to see the film thinking it would be similar to a more typical superhero flick.

“I don’t think anyone under the age of 18 should be seeing this film,” he said.

Curtin University senior lecturer in Literary and Cultural Studies Dr Christina Lee said it would go a long way if the cast and crew of the film openly condemned violence and spoke about how The Joker taps into the current climate of extreme divisiveness.

“The film is a fictional representation of a comic book supervillain … not as an instructional video,” Dr Lee said.

“This, however, won’t stop certain people who identify as incels … using the movie as propaganda.”

Read the full article here

Influencers out of business as Instagram removes ‘likes’ to tackle mental health impact

Annabel Hennessy – The West Australian

Influencers who falsely inflate their popularity could be put out of business by Instagram’s decision to “hide” likes, according to social media experts.

It comes as the Mark Zuckerberg-owned app has been accused of rolling out the new feature simply as a marketing gimmick rather than a genuine attempt to address its impact on mental health.

Instagram yesterday announced that Australian users would no longer see the number of “likes” a post receives, saying it wanted to “take the competition out of posting”.

The West Australian can also reveal Instagram’s algorithm, which promotes posts with more likes to the top of the feed, will remain the same. Dan Anisse, the vice-president of product at InfluencerDB, said the announcement was bad news for professional Instagrammers who scored brand deals after buying likes.

Social media expert Ross Bark, whose company Best Enemies runs cyber safety workshops in schools, said Instagram would also change its algorithm if it was genuinely concerned about mental health.

“It’s absolutely a business decision to try and get people to post more,” Mr Bark said. “They’re not changing the algorithm so you’ve still got that herd mentality of ‘it’s a competition’.”

Read the full article here

Are Facebook and other social sites doing enough to stop hate online?

As Facebook announced it was purging the profiles of Louis Farrakhan, Milo Yiannopoulos, InfoWars and others from its platforms after designating them ‘dangerous’, the question should be asked why it didn’t remove these accounts at the time they were determined to have been in violation of Facebook’s rules, rather than in one big announcement.

Or was this designed to generate positive publicity for Facebook, given their history of slow action and increasing public pressure?

Facebook recently announced that “Going forward, while people will still be able to demonstrate pride in their ethnic heritage, we will not tolerate praise or support for white nationalism and white separatism.”

However, seven weeks after the Christchurch mosque attack, parts of the livestream video of the attack were still available on Facebook and Instagram as of Thursday, with CNN Business reporting yesterday that it had obtained nine versions of the livestream from Eric Feinberg of the Global Intellectual Property Enforcement Center, which tracks online terror-related content.

Feinberg said he had identified 20 copies of the video live on the platforms in recent weeks, and Facebook’s policy director Brian Fishman reportedly told the United States Congress that the company’s livestream algorithm didn’t detect the massacre because there wasn’t “enough gore”.

Clearly, in this case their algorithms need further enhancement. And while Facebook, Twitter and YouTube employ tens of thousands of content moderators, whose job is in part to enforce their codes of conduct by removing statements, videos and images that don’t comply, the process remains reactive and slow.

Further, the codes of conduct are not consistent across providers (e.g. between Google and Facebook), and the ways in which they enforce and manage hate speech online are not the same.

Three weeks ago, the U.K. government released a detailed proposal for new internet laws that would dramatically reshape the way social media companies like Facebook operate. While the proposal remains preliminary, the plan includes setting up an independent social media regulator and giving the U.K. government sweeping powers to fine tech companies for hosting content such as violent videos, hate speech and misinformation. And as with Australia’s new Criminal Code Amendment, passed last month, social media executives like Zuckerberg could even be held personally responsible if their platforms fail to comply.

Australia’s new laws mean that platforms anywhere in the world are required to notify the Australian Federal Police (AFP) of any “abhorrent violent conduct” being streamed once they become aware of it. Failing to notify the AFP can result in fines of up to $168,000 for an individual or $840,000 for a corporation. The laws also make it a criminal offence for platforms not to remove abhorrent violent material “expeditiously”.

In the modern world, social media has become fundamental to how we communicate. Global social media users are estimated to number more than 3 billion, and with around 2 billion active Facebook accounts, both the usage of social media and the challenges it brings are only going to grow.

The issue is how do we control hate speech and incitement to violence on social media across all mainstream platforms?

Ultimately, in the world of social media, legislation is unable to adapt quickly to the ever-changing conditions of online publication and distribution of content. The answer, I believe, is not to rely on the companies themselves to do all the work of managing this; what is urgently needed is a model of regulation that can deal with the myriad challenges the world of social media creates.

While individual country laws and proposals will help, I believe that the best way to ensure genuine protections and a transparent approach is an independent regulator, as is used to promote accountability and ethical standards in the traditional print media.

Voluntary regulation at the industry level, which includes the adoption of a universal code of conduct and the creation of a body that will ensure its application, provides a much more effective system to address these challenges and I think will also help in driving improvements in technology to more effectively enforce and manage the removal of hate speech, sexual and violent online content across all social platforms.

Christchurch mosque shootings: Social media giant Facebook recommends users search for New Zealand’s pain

Annabel Hennessy – The West Australian

Facebook is encouraging users to search its platforms for the horrific Christchurch shooting video through its recommended keywords.

Social media experts have attacked the Silicon Valley giant for including “search suggestions” of “Christchurch live stream”, “Christchurch shooting footage” and “Christchurch video” when users type “Christchurch” into the site.

Cyber expert Ross Bark, whose company Best Enemies runs internet safety programs in Australian schools, said he was concerned Facebook’s algorithms were contributing to the self-radicalisation of extremists.

“Facebook might say they’ve removed at least one and a half million videos of the attack, but I think from a search perspective that actually needs to be more controlled,” he said.

“I don’t think they’re doing enough in terms of managing that. They need to tighten up in terms of their algorithms.”

Read the full article here

How safe is your children’s data online?

The TODAY Show – Channel 9

How much information are your children sharing online about themselves? It might seem like they live private lives, but in reality, just the simple use of an app can lead to their data being exposed.

On average, more than seventy-two million pieces of online data are collected from your child before the age of thirteen.

Watch Ross Bark’s interview on the Channel 9 TODAY Show to see how we all need to be careful about the information we share online.

 

Teenagers as young as 14 are taking drastic measures to stay or get thin, according to an Australian government study.

SBS News

Australian teenagers as young as 14 are taking extreme measures including vomiting or taking laxatives to control their weight.

A new government study found that, while only a very small minority of mid-adolescents met the criteria for anorexia or bulimia, significant numbers had taken action to try to control their weight.

Social media apps like Instagram and Snapchat were considered the most persuasive on young minds.

Cyber safety expert Ross Bark said the role of social media influencers, who post their desirable but often unrealistic lives online, is a new phenomenon impacting young consumers.

“They’re effectively being driven to be like these influencers when in fact it’s impossible for them to do so. So we’re seeing a dramatic upturn in kids that have a lot of anxiety and mental health issues because of that,” Mr Bark said.

For further details on the SBS News story click here

Parents taking over Facebook leads to teens switching off the app

Annabel Hennessy, The Daily Telegraph

FACEBOOK is no longer considered cool by Australian teens, with its popularity among youngsters plummeting 70 per cent in two years.

A new survey by Best Enemies Education of 800 Australians aged 13-18 has revealed just 11.57 per cent say the Mark Zuckerberg site is their most used app — a dramatic decline from two years ago when it was ranked number one.

Meanwhile, Instagram and Snapchat’s popularity is soaring, with more than half of teens saying Instagram is their most used app and about one in four saying they use Snapchat the most.

Best Enemies director Ross Bark, who runs cyber-safety courses in NSW schools, conducted the research and said teens no longer wanted to be on Facebook because it had been taken over by their parents.

“They want to use apps where they’re not going to be monitored,” Mr Bark said.

“Even on Instagram a lot of teens have two accounts; one which they get their family members to follow and another where they’ll be … posting risqué content.”

Read the full article in The Daily Telegraph here