Teenage boys more likely to be recorded as sexual assault offenders than males in any other age group, says report

Annabel Hennessy – The West Australian

Teenage boys are more likely to be recorded by police as sexual assault offenders than males in any other age group, according to a new report.

And more than one in three of those convicted of sexual assault in Australia are not being jailed for their crimes.

It comes amid growing concern about the influence that easy access to online pornography is having on young people, with primary schools now being forced to deal with children attempting to copy sexual material they have viewed online.

The report on sexual assault from the Australian Institute of Health and Welfare, which is set to be released today, found that teenage boys aged 15-19 had the highest offender rates of any male age group — with 102.9 cases per 100,000 in 2018-19.

For men aged 35-44 there were fewer than 70 cases per 100,000.

For men aged 55-64 and 65 and over there were fewer still with less than 50 cases per 100,000 in each of these categories.

Men were also the offenders in 97 per cent of sexual assault cases recorded by police.

Teenage girls aged 15-19 were also the most likely to be victims of sexual assault.

And females across all age groups were seven times more likely to be victims than men.

Of those aged over 15 and found guilty of sexual assault, 57 per cent received a custodial sentence in a correctional institution. This means 43 per cent avoided jail, instead receiving a custodial sentence served in the community, a suspended sentence or a non-custodial order.

‘Victims can experience physical injury, but also many other, ongoing effects, such as fear, anxiety, and changes to their sleep, diet and social routines, as well as their ability to work,’ said AIHW spokeswoman Louise York.

The shocking statistics come after a report from the Federal Government Social Policy and Legal Affairs Committee, released earlier this year, found there was “widespread and genuine concern” about online pornography and its “serious impacts on the welfare of children and young people”.

Best Enemies co-founder Ross Bark, whose company runs cyber safety workshops in Australian schools, said teachers were dealing with cases where kids as young as 12 were emulating sexual acts they had seen online at school.

Mr Bark said while there were now more attempts to teach young people about consent, too many children were still getting their sexual education online.

“They’re easily able to get access to material online that they shouldn’t get access to and it’s normalising dangerous behaviours,” Mr Bark said.

“With sexual abuse there can be a misconception that when a younger kid assaults another kid it is somehow less harmful than other predatory situations but it has long term consequences.”

Collective Shout campaign manager Caitlin Roper, whose organisation campaigns against the sexualisation of girls, said porn was having a significant impact on both the attitudes and the sexual practices of young people.

“Mainstream porn fails to promote safe sex, consent, respect or mutual pleasure. It depicts men as sexually dominant and entitled to use women in any way they see fit. Mainstream porn depicts women as sexual objects who never say no, and who enjoy painful and degrading sex acts,” she said.

If you or someone you know has been a victim of sexual abuse, there is support available. Call 1800 RESPECT, the national sexual assault and domestic family violence counselling service.


Calls for 4chan to be shut down after putrid posts of alleged Christchurch shooter Brenton Tarrant discovered

Annabel Hennessy – The West Australian

A website that was temporarily shut down following accused gunman Brenton Tarrant’s Christchurch massacre has been found to be promoting images and web chats labelling the Australian a “saint”.

An investigation by The West Australian has uncovered the posts, some made this month, on notorious chat website 4chan, prompting concern from cyber experts who say the Federal Government should consider shutting the platform down permanently.

Ross Bark, director of Best Enemies, which runs cybersafety workshops in Australian schools, said 4chan was the “wild west of the internet” and telecommunication companies should consider blocking access. “You do have a duty of care to these people and maybe that is through shutting these sites down,” he said.

“Wherever there is a violent video (links to it) will land on 4chan … it’s just a massive trolling environment and a lot of online harassment stems from the use of 4Chan,” Mr Bark said.

“You’re not going to be able to stop people sharing things on certain sites unless you block them and I think there needs to be more from the telcos’ side to block these websites.”

He said the Federal Government’s Criminal Code Amendment (Sharing of Abhorrent Violent Material) Bill was yet to be fully tested.

“Facebook isn’t that keen to censor information…(and these) laws need to be tested and then (the Government can) see how they can potentially tighten them up,” he said.

Communications Minister Paul Fletcher said the Federal Government was “committed to removing illegal and harmful content from the internet” and that there were reporting mechanisms in place to take down posts that were deemed abhorrent.

“Executing on that intention requires a considered and measured approach focused on the very worst content,” Mr Fletcher said.


YouTube warns parents they are responsible for children’s behaviour on video-streaming platform

Annabel Hennessy – The West Australian

YouTube is warning parents they are responsible for their children’s behaviour on the site after a boom in the number of kids making their own online videos.

The video-streaming platform announced it was updating its terms of service, with changes including a new warning which says parents are liable for their kids’ behaviour on the site.

The changes come after YouTube was hit with a $US170 million ($250 million) fine from American regulators who found the site had “knowingly and illegally” harvested children’s personal information and used it to bombard them with targeted ads.

There has also been a big increase in the number of child YouTubers with a number of the site’s most popular channels now starring children.

While YouTube says its platform is only to be used by children aged 13 and older, cyber safety experts said they did not believe enough was being done to detect accounts belonging to underage children.

Best Enemies director Ross Bark, whose company runs cyber safety workshops, said YouTube had been inconsistent on its age-limit policy and should do more to detect underage accounts.

He said the changes were to protect the platform “legally”.

“I agree that parents do need to take responsibility … and can’t rely on the site to control their child’s viewing habits, however, YouTube could also do a lot more to educate parents,” he said.

World-first online safety hub tackles cyberbullying and harmful online content

7 News Report

A world-first one-stop-shop has been launched for Australian parents worried about the content their children might be confronted with online.

The new portal will provide up-to-date resources and reporting tools to keep the whole family informed.

You can find the National Online Safety Hub here.

Australian online safety website tackling cyberbullying and harmful online content

Worried parents will now have a new tool to keep their kids safe from cyberbullying and confronting material online. The website launched today and will provide up-to-date resources and reporting tools to keep the whole family informed. More info: https://7news.link/kLbubg #OnlineSafety #7NEWS

Posted by 7NEWS Sydney on Saturday, 26 October 2019

Experts warn not to dismiss Joker-related threats as fears emerge film could inspire anti-women extremism

Annabel Hennessy – The West Australian

Violent online threats being inspired by the new Joker movie should not be dismissed as trolling, according to online experts and anti-abuse campaigners who are worried the blockbuster could fuel anti-women extremists.

It comes as police in NSW are running patrols in Randwick, in Sydney’s east, after a threat posted on the notorious internet forum 4chan appeared to warn of a potential attack at a screening of the film at a popular cinema in the suburb.

In the US, security around cinemas has also been beefed up after fears the film’s graphic portrayal of a social outcast who commits violent crimes after being sexually rejected could inspire copy-cat attacks.

The movie, starring Joaquin Phoenix and directed by Todd Phillips, has been likened to Martin Scorsese’s Taxi Driver starring Robert De Niro.

Cyber safety expert Ross Bark said he was worried about parents allowing children to see the film thinking it would be similar to a more typical superhero flick.

“I don’t think anyone under the age of 18 should be seeing this film,” he said.

Curtin University senior lecturer in Literary and Cultural Studies Dr Christina Lee said it would go a long way for the cast and crew of the film to openly condemn violence and talk about how The Joker taps into the current climate of extreme divisiveness.

“The film is a fictional representation of a comic book supervillain … not as an instructional video,” Dr Lee said.

“This, however, won’t stop certain people who identify as incels … using the movie as propaganda.”


Influencers out of business as Instagram removes ‘likes’ to tackle mental health impact

Annabel Hennessy – The West Australian

Influencers who falsely inflate their popularity could be put out of business by Instagram’s decision to “hide” likes, according to social media experts.

It comes as the Mark Zuckerberg-owned app has been accused of rolling out the new feature simply as a marketing gimmick rather than a genuine attempt to address its impact on mental health.

Instagram yesterday announced Australian users would no longer see the number of “likes” a post receives, saying it wanted to “take the competition out of posting”.

The West Australian can also reveal Instagram’s algorithm, which promotes posts with more likes to the top of the feed, will remain the same.

Dan Anisse, the vice-president of product at InfluencerDB, said the announcement was bad news for professional Instagrammers who scored brand deals after buying likes.

Social media expert Ross Bark, whose company Best Enemies runs cyber safety workshops in schools, said Instagram would also change its algorithm if it was genuinely concerned about mental health.

“It’s absolutely a business decision to try and get people to post more,” Mr Bark said. “They’re not changing the algorithm so you’ve still got that herd mentality of ‘it’s a competition’.”


Are Facebook and other social sites doing enough to stop hate online?

As Facebook announced it was purging the profiles of Louis Farrakhan, Milo Yiannopoulos, InfoWars and others from its platforms, designating them as ‘dangerous’, the question should be asked why it didn’t remove these accounts at the time they were determined to be in violation of Facebook’s rules, rather than in one big announcement.

Or was this designed to generate positive publicity for Facebook, given their history of slow action and increasing public pressure?

Facebook recently announced that “Going forward, while people will still be able to demonstrate pride in their ethnic heritage, we will not tolerate praise or support for white nationalism and white separatism.”

However, seven weeks after the Christchurch mosque attack, parts of the livestream video of the attack as of Thursday were still available on Facebook and Instagram, with CNN Business reporting yesterday that it had obtained nine versions of the livestream from Eric Feinberg of the Global Intellectual Property Enforcement Center, which tracks online terror-related content.

Feinberg said he had identified 20 copies of the video live on the platforms in recent weeks, and Facebook policy director Brian Fishman reportedly told the United States Congress that the company’s livestream algorithm didn’t detect the massacre because there wasn’t “enough gore”.

Clearly, in this case their algorithms need further enhancement. And while Facebook, Twitter and YouTube employ tens of thousands of content moderators, whose job is in part to enforce their codes of conduct by removing statements, videos and images that don’t comply, the process continues to be reactive and slow.

Further, the codes of conduct across the various providers, e.g. Google and Facebook, are not consistent, and the ways in which they enforce and manage hate speech online are not the same.

Three weeks ago, the U.K. government released a detailed proposal for new internet laws that would dramatically reshape the ways in which social media companies like Facebook operate. While the proposal remains preliminary, the plan includes setting up an independent social media regulator and giving the U.K. government sweeping powers to fine tech companies for hosting content like violent videos, hate speech, misinformation and more. As with Australia’s new Criminal Code Amendment, passed last month, social media executives like Zuckerberg could even be held personally responsible if their platforms fail to comply.

Australia’s new laws mean that platforms anywhere around the world are required to notify the Australian Federal Police (AFP) of any “abhorrent violent conduct” being streamed once they become aware of it. Failing to notify the AFP can result in fines of up to $168,000 for an individual or $840,000 for corporations. It also makes it a criminal offence for platforms not to remove abhorrent violent material “expeditiously”.

In the modern world social media has become fundamental to how we communicate. Global social media users are estimated to number more than 3 billion, with around 2 billion active Facebook accounts, and both this usage and these challenges are only going to grow.

The issue is how do we control hate speech and incitement to violence on social media across all mainstream platforms?

Ultimately, in the world of social media, legislation is unable to adapt quickly to the ever-changing conditions of online publication and distribution of content. The answer, I believe, is not to rely on the companies themselves to do all the work to manage this; what’s needed urgently is a model of regulation that can deal with the myriad challenges the world of social media creates.

While individual country laws and proposals will help, I believe that the best way to ensure genuine protections and a transparent approach is an independent regulator, as is used to promote accountability and ethical standards in the traditional print media.

Voluntary regulation at the industry level, which includes the adoption of a universal code of conduct and the creation of a body to ensure its application, provides a much more effective system to address these challenges. I think it will also help drive improvements in technology to more effectively enforce and manage the removal of hate speech and sexual and violent content across all social platforms.

Christchurch mosque shootings: Social media giant Facebook recommends users search for New Zealand’s pain

Annabel Hennessy – The West Australian

Facebook is encouraging users to search its platforms for the horrific Christchurch shooting video through its recommended keywords.

Social media experts have attacked the Silicon Valley giant for including “search suggestions” of “Christchurch live stream”, “Christchurch shooting footage” and “Christchurch video” when users type “Christchurch” into the site.

Cyber expert Ross Bark, whose company Best Enemies runs internet safety programs in Australian schools, said he was concerned Facebook’s algorithms were contributing to the self-radicalisation of extremists.

“Facebook might say they’ve removed at least one and a half million videos of the attack, but I think from a search perspective that actually needs to be more controlled,” he said.

“I don’t think they’re doing enough in terms of managing that. They need to tighten up in terms of their algorithms.”


How safe is your children’s data online?

The TODAY Show – Channel 9

How much information are your children sharing online about themselves? It might seem like they live private lives, but in reality, just the simple use of an app can lead to their data being exposed.

On average, more than 72 million pieces of online data are collected from your child before the age of 13.

Watch Ross Bark’s interview on the Channel 9 TODAY Show to see how we all need to be careful about the information we share online.