Sunday, September 14, 2025

Summary of Forensic Plagiarism Analysis Report: IEEE IoT Newsletter vs. ComputerWorld


   © 2025 Charles Sun. All rights reserved.

On September 14, 2025, I submitted a formal complaint to IEEE regarding documented plagiarism in their IoT Newsletter article “IPv6 and Internet of Things: Prospects for Latin America” (July 17, 2017). The article contains multiple instances of unattributed reuse from my original ComputerWorld piece “No IoT Without IPv6” (May 19, 2016).


🔍 Key Findings
  • Total Instances Identified: 7

  • Severity Breakdown: 5 Major, 2 Moderate

  • Types of Misappropriation:

    • Verbatim copying of slogans and core arguments

    • Paraphrased data and framing without attribution

    • Direct reuse of numeric values and phrasing

    • Concealed citations of expert analysis and market data


📌 Representative Example

ComputerWorld (May 2016): “No IoT Without IPv6” (title); “…the IoT won’t be happening without IPv6.”

IEEE IoT Newsletter (July 2017): “There is no IoT without IPv6.” (¶ 4 & Conclusions)

Classification: Major — Verbatim slogan / core thesis


📊 Full Comparative Evidence Table

The full table documents all seven instances with side-by-side excerpts, paragraph mapping, and severity classification.


📣 Status & Next Steps

The full report was submitted to IEEE on September 14, 2025, with a stated deadline for corrective action.

If no resolution is reached, the complete evidence package — including visuals, correspondence, and recommended actions — will be published in full and shared with relevant stakeholders.


Citation Formats for This Article:

APA (7th Edition) Citation
Sun, C. (2025, September 14). Summary of forensic plagiarism analysis report: IEEE IoT Newsletter vs. ComputerWorld. IPv6 Czar's Blog. https://ipv6czar.blogspot.com/2025/09/summary-of-forensic-plagiarism-analysis.html

MLA (9th Edition) Citation
Sun, Charles. "Summary of Forensic Plagiarism Analysis Report: IEEE IoT Newsletter vs. ComputerWorld." IPv6 Czar's Blog, 14 Sept. 2025, https://ipv6czar.blogspot.com/2025/09/summary-of-forensic-plagiarism-analysis.html.

Chicago (17th Edition) Citation
Charles Sun. "Summary of Forensic Plagiarism Analysis Report: IEEE IoT Newsletter vs. ComputerWorld." IPv6 Czar’s Blog. September 14, 2025. https://ipv6czar.blogspot.com/2025/09/summary-of-forensic-plagiarism-analysis.html


Disclaimer: The views presented are only personal opinions and do not necessarily represent those of the U.S. Government.

#PlagiarismExposed #PublishingEthics #IEEEAccountability #ForensicDocumentation #IntellectualIntegrity #NoIoTWithoutIPv6 #CitationMatters #TechTransparency #DataMisuse #CharlesSunReports


© 2025 Charles Sun. All rights reserved.




Tuesday, September 9, 2025

Charles Sun Named to 2026 Engage National Security & Enforcement 150





I’m honored to share some exciting news: I’ve been named to the 2026 Engage National Security & Enforcement 150, a peer‑driven recognition that celebrates leaders who build trust, foster collaboration, and deliver real solutions across our national security and enforcement communities. This honor is especially meaningful because it reflects the voices of colleagues and partners who see the value in showing up with intention, humility, and a shared sense of mission. Below is the official press release with all the details, including the full list of honorees and the criteria that define this recognition.


FOR IMMEDIATE RELEASE


Media Contact:

Charles Sun
Email: press@aucglobal.com
LinkedIn: linkedin.com/in/charlessun

 

WASHINGTON, DC — September 9, 2025 — Charles Sun, a federal IPv6 thought leader and Information and Communications Technology (ICT) executive whose leadership has been recognized nationally, has been named to the 2026 Engage National Security & Enforcement 150, an annual list curated by OrangeSlices AI honoring leaders who exemplify collaboration, innovation, and mission impact across the national security and enforcement community.

Nominated and selected by peers, honorees are recognized not for their titles, but for how they lead — listening with intention, creating space for others, and building trust across boundaries to deliver real, workable solutions.

“I am deeply honored — and truly humbled — to be recognized alongside such an extraordinary group of leaders,” said Sun. “This award reflects the collective efforts of colleagues, partners, and mentors who make this work possible every day.”

The 2026 honorees represent agencies and partners across the U.S. Department of Homeland Security, U.S. Department of State, U.S. Department of Justice, FBI, and more.

“These leaders are already shaping how national security and enforcement agencies deliver on behalf of the people they serve,” said Shelley McGuire, Chief Strategy Officer at OrangeSlices AI. “They are the ones others look to for guidance, collaboration, and momentum.”

Selection Criteria

Honorees were evaluated using a prescribed rubric, with the following six characteristics serving as core evaluation criteria. These qualities reflect what it truly means to lead with purpose, collaboration, and impact in today’s Federal environment:

  • Open Communication – They share insights through events, published thought pieces, or open platforms like LinkedIn — because they believe transparency and shared knowledge drive progress.
  • Mission‑Focused Thinking – Their perspective extends beyond organizational walls. They prioritize the needs of the communities they serve, actively seeking feedback and collaboration to drive better outcomes.
  • Reliable Partnership – They lead with clarity, humility, and accountability. These individuals are trusted partners who understand their role — and the roles of others — and consistently deliver on shared goals.
  • Community Presence – These leaders are actively involved. Whether at events, in working groups, or through mentorship, they show up, contribute, and help lift the entire ecosystem.
  • Perspective‑Seeking – Engaged leaders welcome input from a wide range of stakeholders — including users, partners, and community voices — to inform more complete, inclusive, and effective solutions.
  • Innovation‑Minded – They challenge the status quo. By championing new technologies, processes, and ideas, they help drive progress and adaptability in an increasingly complex environment.

These characteristics are core to the Engage GovCon ethos and provide all leaders across government and industry a model for how to collaborate, engage, and thrive.

About Charles Sun

Charles Sun has over three decades of leadership experience in ICT across the public and private sectors. He has served as Technology Co‑Chair of the U.S. Federal IPv6 Task Force, chaired the Federal IPv6 Technical Roundtable, and held senior ICT leadership roles at the U.S. Department of Homeland Security, Export‑Import Bank of the United States, and the U.S. Department of Commerce — including the U.S. Census Bureau as well as the U.S. Department of Labor’s Bureau of Labor Statistics. Across these agencies, he advanced IT modernization, infrastructure resilience, cybersecurity posture, and operational efficiency. In the private sector, Sun has excelled as a senior network engineer and consultant for organizations including the University of Maryland, Northrop Grumman, AOL Time Warner, and Georgetown University. Since 2018, he has been a columnist for Homeland Security Today on Internet security and IPv6 deployment.

About OrangeSlices AI

OrangeSlices AI: Playful Name. Serious about Democratizing Data and Disrupting the GovCon Competitive Intelligence Market. The core mission for OS AI is to identify, share and create timely, actionable and responsible information and data products, tools and resources that 1) are accessible to all organizations and their teams, small to large; 2) will assist Federal government and Industry IT and consulting leaders to more effectively identify and engage with each other; and 3) shine a spotlight on those leaders and companies that are #DoingItRight.

Meet the 2026 honorees: https://lnkd.in/eH5Ssq2n


###

Monday, August 25, 2025

The Unconscious Bias in the Human Mind: How It Seeds AI Flaws


 © 2025 Charles Sun. All rights reserved.

Artificial intelligence doesn’t operate in a vacuum. Long before model training begins, human cognition—both conscious and unconscious—shapes the data, labels, and “ground truth” that algorithms rely on. Understanding this process is key to building ethical, fair, and reliable AI systems. This article complements my main piece, Bias Is Not a Bug—It’s a Mirror: Why AI Reflects Us More Than We Realize, with a technical deep dive into the neurocognitive and implicit human biases that underlie algorithmic bias.

 

Executive Summary

1. Understanding Unconscious Bias and Fast/Slow Thinking

  • Definition: Unconscious (implicit) bias refers to automatic mental associations that influence judgments without conscious awareness or intent, especially under time pressure or cognitive load. (Greenwald & Banaji, 1995 PubMed | PDF).
  • Dual-process theory: Fast, heuristic “System 1” thinking can trigger stereotype-consistent responses; slower, deliberative “System 2” reasoning may override biases but requires attention and time—often scarce in high-stakes settings (Kahneman, 2011 review PDF; Gawronski, 2024 lecture notes PDF).

2. Evidence Across Domains

  1. Healthcare: A systematic review found many clinicians exhibit implicit pro‑White/anti‑minority bias, with associated differences in communication, treatment decisions, and health outcomes (Hall et al., 2015 PubMed; AHRQ summary).
  2. Hiring: Field experiments showed resumes with “White‑sounding” names received ~50% more callbacks than otherwise identical resumes with “Black‑sounding” names (Bertrand & Mullainathan, 2004 NBER | PDF).
  3. Split-second decisions: Shooter task simulations indicate lower thresholds to “shoot” Black targets, illustrating automatic threat associations (University of Chicago summary with references).

3. Measurement Limits and Durability of Change

  • Predictive validity: Implicit measures can predict behavior but with small, context-dependent effect sizes (Forscher et al., 2019 PubMed | PDF).
  • Durability: Brief interventions rarely produce lasting behavioral change without structural support (Frontiers in Psychology, 2019 link).

4. How Bias Enters the AI Pipeline

  1. Problem framing: Metrics like healthcare spending as a proxy for patient need encode disparities (Obermeyer et al., 2019 Science).
  2. Data selection: Representation drives both model performance and equity (Berkeley Haas EGAL Playbook).
  3. Labeling: Annotator context and demographics shape “ground truth” (ACL SocialNLP, 2020 PDF).
  4. Aggregation: Crowdsourced labels require bias-aware modeling (Chen et al., 2023 arXiv); a minimal sketch follows this list.
  5. Human+LLM pipelines: LLM-assisted annotation can codify bias (Zhang et al., 2024 – arXiv).
  6. Feedback loops: Biased outputs influence future data unless monitored (NIST SP 1270).
  7. Documentation: Datasheets (arXiv, PDF) and Model Cards (ACM DOI) make dataset and model assumptions explicit.
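To make the aggregation step concrete, here is a minimal sketch of bias-aware label aggregation in Python: annotators are re-weighted by their estimated accuracy against a soft consensus, in the spirit of Dawid–Skene-style models cited above. All names (aggregate_labels, the smoothing constants, the toy votes) are illustrative assumptions, not an API from Chen et al. (2023).

```python
# Minimal sketch: re-weight annotators by estimated accuracy (Dawid-Skene-like).
from collections import defaultdict

def aggregate_labels(votes, n_iters=10):
    """votes: iterable of (item_id, annotator_id, label) with labels in {0, 1}.
    Returns {item_id: estimated probability that the true label is 1}."""
    by_item = defaultdict(list)
    for item, ann, label in votes:
        by_item[item].append((ann, label))

    # Initialize the soft "truth" with a plain majority vote per item.
    truth = {item: sum(l for _, l in v) / len(v) for item, v in by_item.items()}

    for _ in range(n_iters):
        # Score each annotator against the current soft truth (Laplace-smoothed).
        agree, total = defaultdict(float), defaultdict(float)
        for item, ann, label in votes:
            p = truth[item]
            agree[ann] += p if label == 1 else (1 - p)
            total[ann] += 1
        acc = {a: (agree[a] + 1) / (total[a] + 2) for a in total}

        # Re-estimate each item as a naive-Bayes vote weighted by accuracy.
        for item, v in by_item.items():
            w1 = w0 = 1.0
            for a, l in v:
                w1 *= acc[a] if l == 1 else 1 - acc[a]
                w0 *= acc[a] if l == 0 else 1 - acc[a]
            truth[item] = w1 / (w1 + w0)
    return truth

# Toy example: annotator "a3" disagrees systematically and gets down-weighted.
votes = [("x1", "a1", 1), ("x1", "a2", 1), ("x1", "a3", 0),
         ("x2", "a1", 0), ("x2", "a2", 0), ("x2", "a3", 1)]
print(aggregate_labels(votes))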

5. Human–AI Decision Dynamics

  • Automation bias: Prescriptive recommendations from AI sway human emergency decisions (MIT News, 2022 link).
  • Selective adherence: Public sector experiments show overreliance on biased AI advice (JPART, 2023 link).

6. Design Remedies That Work

Concrete remedies are operationalized in the Implementation Snapshot below and in Appendix A's Implementation Playbook.

7. Implementation Snapshot (Next 90 Days)

  • Define context-specific fairness requirements and add them to acceptance criteria and release gates.
  • Stand up a diverse, trained annotator pool; set IRR targets; retrain for drift.
  • Pilot bias-aware label aggregation; log adjudication rationale; update guidelines based on disagreements.
  • Add fairness gates to CI/CD: pre-release slice testing and post-release drift monitoring with rollback criteria (NIST AI RMF Playbook); a minimal gate sketch follows this list.
  • Document assumptions (Model Cards, Datasheets) and establish an escalation path for edge cases and vulnerable groups.
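As a concrete illustration of the fairness-gate bullet above, the following sketch assumes binary predictions, false-positive-rate balance as the chosen slice metric, and an illustrative 0.05 gap threshold; MAX_FPR_GAP, fairness_gate, and the toy data are assumptions, not values prescribed by the NIST AI RMF Playbook.

```python
# Minimal sketch of a pre-release fairness gate for a CI/CD pipeline.
import sys
from collections import defaultdict

MAX_FPR_GAP = 0.05  # illustrative release threshold, set per product context

def fpr_by_group(y_true, y_pred, groups):
    """False-positive rate per slice, computed over true negatives only."""
    fp, neg = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        if t == 0:
            neg[g] += 1
            fp[g] += p
    return {g: fp[g] / neg[g] for g in neg if neg[g] > 0}

def fairness_gate(y_true, y_pred, groups):
    """Return True (pass) if the worst FPR gap across slices is acceptable."""
    rates = fpr_by_group(y_true, y_pred, groups)
    gap = max(rates.values()) - min(rates.values())
    print(f"FPR by slice: {rates}; gap: {gap:.3f}")
    return gap <= MAX_FPR_GAP

if __name__ == "__main__":
    # Toy slice test: replace with the real evaluation set in the pipeline.
    y_true = [0, 0, 0, 0, 1, 1, 0, 0]
    y_pred = [0, 1, 0, 0, 1, 1, 1, 0]
    groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
    # Nonzero exit fails the build, which is the point of a release gate.
    sys.exit(0 if fairness_gate(y_true, y_pred, groups) else 1)
```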

8. Metrics, Monitoring, and Documentation

  • Metrics: Select fairness metrics that match product context and harms (e.g., false-positive balance for moderation, calibration for risk scoring). Track alongside business KPIs; avoid “metric shopping.”
  • Monitoring: Treat fairness drift like performance drift; define thresholds and remediation playbooks (see the drift-monitor sketch after this list).
  • Documentation: Maintain lineage from requirements → datasets → releases. Align with lifecycle guidance (NIST SP 1270, NIST AI RMF Playbook). Where relevant, monitor forthcoming requirements under the EU AI Act.
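One way to treat fairness drift like performance drift, as the monitoring bullet suggests, is a sliding-window check against the release baseline. The sketch below is a minimal illustration under assumed inputs (a stream of labeled outcomes with group tags); the window size, tolerance, and class name are hypothetical.

```python
# Minimal sketch of post-release fairness-drift monitoring on an event stream.
from collections import deque, defaultdict

class FairnessDriftMonitor:
    def __init__(self, baseline_gap, window=1000, tolerance=0.02):
        self.baseline = baseline_gap      # FPR gap measured at release time
        self.window = deque(maxlen=window)  # most recent labeled events
        self.tolerance = tolerance        # illustrative drift allowance

    def observe(self, y_true, y_pred, group):
        """Record one labeled outcome; returns True if drift is detected."""
        self.window.append((y_true, y_pred, group))
        return self.check()

    def check(self):
        fp, neg = defaultdict(int), defaultdict(int)
        for t, p, g in self.window:
            if t == 0:
                neg[g] += 1
                fp[g] += p
        rates = [fp[g] / neg[g] for g in neg if neg[g] > 0]
        if len(rates) < 2:
            return None  # not enough slices observed yet to compare
        gap = max(rates) - min(rates)
        # Alert when drift exceeds tolerance over the release baseline;
        # the remediation playbook (rollback, retraining) takes over here.
        return gap > self.baseline + self.tolerance
```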

9. Common Pitfalls and Anti-Patterns

  • Treating “ground truth” labels as objective without auditing annotator instructions and disagreement patterns
  • Relying on single-session trainings to “fix minds” instead of redesigning processes and incentives (Frontiers 2019)
  • Declaring success after improving one fairness metric in one slice while other harms persist
  • Shipping black-box models without documentation, escalation rules, or post-deployment monitoring (a minimal model-card stub follows this list)
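As a counter-example to the last anti-pattern, a model can ship with at least a machine-readable documentation stub. The sketch below loosely follows the spirit of Model Cards (cited above); the field names, values, and contact address are illustrative, not the official schema.

```python
# Minimal sketch of a machine-readable model-card stub, versioned per release.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    name: str
    version: str
    intended_use: str
    out_of_scope_uses: list = field(default_factory=list)
    evaluation_slices: dict = field(default_factory=dict)  # slice -> metrics
    known_limitations: list = field(default_factory=list)
    escalation_contact: str = ""

card = ModelCard(
    name="triage-risk-scorer",           # hypothetical model
    version="1.4.0",
    intended_use="Rank support tickets for human review; not automated denial.",
    out_of_scope_uses=["credit decisions", "employment screening"],
    evaluation_slices={"region=LATAM": {"fpr": 0.08}, "region=EU": {"fpr": 0.06}},
    known_limitations=["sparse data for low-traffic locales"],
    escalation_contact="fairness-oncall@example.org",
)
print(json.dumps(asdict(card), indent=2))  # stored alongside the release
```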

10. Cognitive Shortcuts Become AI Outputs

Bias often enters AI before modeling—through fast, automatic human cognition shaping data and labels. Upstream governance, disciplined workflow design, and continuous monitoring are essential to counteract these systematic skews.

The task isn’t to “perfect minds in a workshop,” but to engineer workflows—requirements, data, labeling, aggregation, testing, deployment, and monitoring—so that the fast doesn’t silently overrun the fair. With disciplined upstream controls and lifecycle governance, AI can help us see our blind spots—and then do something about them.

Summary & Call to Action

Bias in AI is ultimately a mirror of human cognition. Confronting it requires reflection and deliberate action:

  • Audit datasets and labels.
  • Implement bias-aware aggregation and continuous monitoring.
  • Embed diverse perspectives in decision-making.

 

Design AI that scales our best judgment, not our worst instincts. For broader context, see the main article: Bias Is Not a Bug—It’s a Mirror: Why AI Reflects Us More Than We Realize. Share your insights and experiences in the comments—let’s build better AI together.



References

Greenwald, A. G., & Banaji, M. R. (1995). Implicit social cognition: Attitudes, self-esteem, and stereotypes. Psychological Review, 102(1), 4–27. https://pubmed.ncbi.nlm.nih.gov/7878162/

Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.

Gawronski, B. (2024). Dual-process theory: An overview. Lecture notes. http://bertramgawronski.com/documents/GLC2024DPT.pdf

Hall, W. J., Chapman, M. V., Lee, K. M., Merino, Y., & Day, S. H. (2015). Implicit racial/ethnic bias among health care professionals and its influence on health care outcomes: A systematic review. American Journal of Public Health, 105(12), e60–e76. https://pubmed.ncbi.nlm.nih.gov/26469668/

Bertrand, M., & Mullainathan, S. (2004). Are Emily and Greg more employable than Lakisha and Jamal? A field experiment on labor market discrimination. American Economic Review, 94(4), 991–1013. https://www.nber.org/papers/w9873

Obermeyer, Z., Powers, B. W., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://www.science.org/doi/10.1126/science.aax2342

 Forscher, P. S., Mitamura, C., Dixit, S., & Cox, W. T. (2019). A meta-analysis of the predictive validity of the Implicit Association Test and its ability to predict behavior. Perspectives on Psychological Science, 14(5), 678–692. https://pubmed.ncbi.nlm.nih.gov/31192631/

Berkeley Haas. (2020). Mitigating Bias in Artificial Intelligence: An Equity Fluent Leadership Playbook. Center for Equity, Gender & Leadership. https://haas.berkeley.edu/equity/resources/playbooks/mitigating-bias-in-ai/

Chen, J., Zhang, Y., & Wang, H. (2023). Bias-aware aggregation for crowdsourced data labeling. arXiv. https://arxiv.org/abs/2302.13100

NIST. (2022). Towards a Standard for Identifying and Managing Bias in Artificial Intelligence (Special Publication 1270). National Institute of Standards and Technology. https://www.nist.gov/publications/sp-1270

MIT News. (2022). When subtle biases in AI influence emergency decisions. https://news.mit.edu/2022/when-subtle-biases-ai-influence-emergency-decisions-1216

JPART. (2023). Selective adherence to AI recommendations in public sector decision-making. Journal of Public Administration Research and Theory, 33(1), 153–170. https://academic.oup.com/jpart/article/33/1/153/6524536

Appendix A. Implementation Playbook

Leads & PMs

·       Define context-specific fairness requirements and go/no-go criteria

·       Resource annotation, calibration, and audits as first-class quality work

·       Require an Annotation Plan artifact (sampling, instructions, training, IRR targets, aggregation, adjudication); see the IRR sketch below
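A minimal sketch of how an Annotation Plan's IRR target might be checked, using Cohen's kappa for two annotators over categorical labels; the 0.7 target and all names here are illustrative assumptions, not a universal standard.

```python
# Minimal sketch: inter-rater reliability (IRR) via Cohen's kappa.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters: (observed - chance) / (1 - chance)."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    if expected == 1:  # degenerate case: both raters used a single label
        return 1.0
    return (observed - expected) / (1 - expected)

# Example IRR gate against an illustrative Annotation Plan target of 0.7.
a = ["spam", "ok", "ok", "spam", "ok", "spam"]
b = ["spam", "ok", "spam", "spam", "ok", "spam"]
kappa = cohens_kappa(a, b)
print(f"kappa = {kappa:.2f}; meets target: {kappa >= 0.7}")
```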


Data/ML Teams

·       Pilot bias-aware aggregation; run ablations to quantify label-source effects

·       Build an edge-case escalations queue; fold resolutions back into guidelines

·       Add fairness gates to CI/CD; create slice dashboards and drift alarms linked to incident response


Ops/QA

·       Blind sensitive fields where feasible; rotate annotators to reduce priming

·       Track annotator diagnostics (consistency, disagreement patterns)

·       Log adjudication rationale; version and publish guideline changes

 

Selected Sources and Further Reading

·       Implicit bias and dual‑process: Greenwald & Banaji (1995); Shleifer’s review of Kahneman; Gawronski notes https://pubmed.ncbi.nlm.nih.gov/7878162/ https://faculty.washington.edu/agg/pdf/Greenwald_Banaji_PsychRev_1995.OCR.pdf https://scholar.harvard.edu/files/shleifer/files/kahneman_review_jel_final.pdf http://bertramgawronski.com/documents/GLC2024DPT.pdf

·       Domain evidence: Hall et al. (2015); Bertrand & Mullainathan (2004); shooter‑task summary https://pubmed.ncbi.nlm.nih.gov/26469668/ https://psnet.ahrq.gov/issue/implicit-racialethnic-bias-among-health-care-professionals-and-its-influence-health-care https://www.nber.org/papers/w9873 https://www.nber.org/system/files/working_papers/w9873/w9873.pdf https://magazine.uchicago.edu/0778/investigations/shooters_choice.shtml

·       Measurement limits: Forscher et al. (2019); Frontiers review https://pubmed.ncbi.nlm.nih.gov/31192631/ https://gwern.net/doc/psychology/cognitive-bias/2019-forscher.pdf https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2019.02483/full

·       Data/labels/aggregation: SocialNLP bias‑mitigation methods; Chen et al. (2023) crowdsourcing aggregation under observation bias; human/LLM labeler bias (2024) https://aclanthology.org/2020.socialnlp-1.2.pdf https://arxiv.org/abs/2302.13100 https://arxiv.org/abs/2410.07991

·       Decision dynamics and governance: MIT emergency‑decision study; JPART on public sector decisions; NIST SP 1270; Berkeley Haas playbook https://news.mit.edu/2022/when-subtle-biases-ai-influence-emergency-decisions-1216 https://academic.oup.com/jpart/article/33/1/153/6524536 https://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.1270.pdf https://haas.berkeley.edu/wp-content/uploads/UCB_Playbook_R10_V2_spreads2.pdf


Citation Formats for This Article:

APA:

Sun, C. (2025, August 25). The Unconscious Bias in the Human Mind: How It Seeds AI Flaws. Common Sense. https://ipv6czar.blogspot.com/2025/08/the-unconscious-bias-in-human-mind-how.html

Sun, C. (2025, August 25). The Unconscious Bias in the Human Mind: How It Seeds AI Flaws. LinkedIn. https://www.linkedin.com/pulse/unconscious-bias-human-mind-how-seeds-ai-flaws-charles-sun-cwfte

MLA:

Sun, Charles. "The Unconscious Bias in the Human Mind: How It Seeds AI Flaws." Common Sense, 25 Aug. 2025, https://ipv6czar.blogspot.com/2025/08/the-unconscious-bias-in-human-mind-how.html.

Sun, Charles. "The Unconscious Bias in the Human Mind: How It Seeds AI Flaws." LinkedIn, 25 Aug. 2025, https://www.linkedin.com/pulse/unconscious-bias-human-mind-how-seeds-ai-flaws-charles-sun-cwfte.

Chicago:

Sun, Charles. "The Unconscious Bias in the Human Mind: How It Seeds AI Flaws." Common Sense (blog), August 25, 2025. https://ipv6czar.blogspot.com/2025/08/the-unconscious-bias-in-human-mind-how.html.

Sun, Charles. "The Unconscious Bias in the Human Mind: How It Seeds AI Flaws." LinkedIn, August 25, 2025. https://www.linkedin.com/pulse/unconscious-bias-human-mind-how-seeds-ai-flaws-charles-sun-cwfte.


#ArtificialIntelligence #MachineLearning #DataScience #AIEthics #EthicalAI #ResponsibleAI #BiasInAI #AlgorithmicBias #FairnessInAI #AITransparency #DataBias #HumanInTheLoop #HumanCenteredAI #TrustworthyAI #AIandSociety #FutureOfAI #TechForGood #AIReflection #AIaccountability #AutomatedMachineLearning #AImirror

Disclaimer: The views presented are only personal opinions and do not necessarily represent those of the U.S. Government.

 

© 2025 Charles Sun. All rights reserved.