LinkedIn announces more stringent measures against inappropriate content on its platform

Amid various controversial debates and tensions that are perhaps becoming even more heated as the US elections approach, LinkedIn this week outlined a number of new measures it is implementing to ensure that its members feel comfortable and protected when engaging on the platform.
LinkedIn explains:

Every LinkedIn member is entitled to a safe, secure, and professional experience on our platform. We’ve heard from some of you that we should set a higher bar for safe communication given the professional context of LinkedIn. We couldn’t agree more. We strive to ensure that the conversation remains respectful and professional.

Making policies stronger and clearer
LinkedIn is working to update its Professional Community Policies to clarify that there is no place on its platform for hateful, offensive, inflammatory, or racist content.

In this ever-changing world, people are increasingly discussing sensitive topics on LinkedIn, and it is very important that these conversations remain constructive and respectful, not harmful. When we see content or behavior that violates our policies, we will remove it immediately.

LinkedIn also notes that it is releasing new educational content to help users understand their obligations in this regard, which will appear as toast notifications or reminders when they post, send a message, or otherwise interact.
Using artificial intelligence and machine learning to protect against inappropriate content
LinkedIn says it is also working with parent company Microsoft to keep the LinkedIn feed relevant and professional.

More recently, we have extended our defenses with new AI models to find and remove profiles containing inappropriate content and created the LinkedIn Fairness Toolkit (LiFT) to help us measure multiple definitions of fairness in large-scale machine learning workflows.

Earlier this week, LinkedIn published a full overview of the LinkedIn Fairness Toolkit (LiFT), which it says helps build:

a fairer platform to avoid harmful bias in our models and to ensure that people of equal talent have equal access to job opportunities.
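LinkedIn has not detailed LiFT's internals here, but the kind of measurement such a toolkit performs can be illustrated with a minimal sketch. The metric below, demographic parity difference, is one common definition of fairness; the data, function name, and group labels are hypothetical examples, not LinkedIn's models or the actual LiFT API.

```python
# Minimal sketch of one fairness definition: demographic parity difference.
# Hypothetical data; not LinkedIn's models or the actual LiFT library.

def demographic_parity_difference(predictions, groups):
    """Absolute difference in positive-outcome rate between two groups.

    predictions: list of 0/1 model outcomes (e.g. "recommended for a job")
    groups:      list of group labels, one per prediction
    """
    rates = {}
    for group in set(groups):
        outcomes = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    a, b = rates.values()
    return abs(a - b)

# Example: group "A" is recommended 75% of the time, group "B" only 25%,
# so the disparity between the two groups is 0.5.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A value of 0 would mean both groups receive positive outcomes at the same rate; in practice, toolkits like LiFT compute several such metrics at once, since no single definition of fairness covers every situation.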

Creating economic opportunity for every member of the global workforce is now a key focus for former LinkedIn CEO Jeff Weiner, who stepped down from that role in June to take on this new mission. The COVID-19 pandemic could actually open the door to significant change in these areas: as the economy recovers, there may be a fresh opportunity to implement updated standards that help reduce systemic bias.

It’s not an easy task, but LinkedIn is already taking steps in this direction.
In addition, LinkedIn recently rolled out a new process for detecting and hiding inappropriate InMail messages, addressing another key concern for users.

