Advances in Decision Sciences (ADS)


Published by Scientific and Business World, Singapore


Title

Multi-Objective Constrained Reinforcement Learning for Joint Routing–MAC–Duty Cycling in Low-Power Wireless Sensor Networks

Authors

  • Ghaida Muttashar Abdulsahib
    College of Computer Engineering, University of Technology, IRAQ
  • Mohammed Awad Mohammed Ataelfadiel
    Applied College, King Faisal University, Saudi Arabia

Abstract

Introduction: Wireless Sensor Networks (WSNs) face significant challenges in balancing energy efficiency, latency, and reliability under severe resource constraints. Existing methods either optimize network layers in isolation or rely on static cross-layer coordination, and both adapt poorly to changing network conditions.
Purpose: This study introduces a Constrained Multi-Objective Reinforcement Learning Model (CMORLM) that jointly optimizes routing, Medium Access Control (MAC), and Duty Cycling Optimization (DCO) in low-power WSNs.
Methods: We formulate CMORLM as a constrained Markov Decision Process (MDP) with three competing objectives: minimizing Energy Consumption (EC), minimizing End-to-End Latency (EEL), and maximizing Packet Delivery Ratio (PDR). Hard constraints are imposed on residual energy, buffer occupancy, and Quality of Service (QoS) requirements. A primal-dual optimization method combines Lagrangian Constraint Handling (LCH) with multi-objective policy gradients. The policy network uses a shared encoder with factorized heads for routing, MAC, and DCO decisions, and Federated Gradient Aggregation (FGA) enables distributed learning across Sensor Nodes (SNs).
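The primal-dual scheme described in the Methods can be sketched as follows. This is a minimal illustrative NumPy sketch, not the authors' implementation: the function names, learning rates, and the three-objective/one-gradient-per-constraint interface are all assumptions made for exposition.

```python
import numpy as np

def primal_dual_step(theta, lam, grads, constraint_vals, limits,
                     lr_theta=1e-3, lr_lam=1e-2,
                     weights=(1 / 3, 1 / 3, 1 / 3)):
    """One primal-dual update on the Lagrangian
        L(theta, lam) = sum_i w_i J_i(theta) - sum_j lam_j (C_j(theta) - d_j)

    theta           : flat vector of policy parameters
    lam             : Lagrange multipliers, one per constraint (kept >= 0)
    grads           : {"objectives": [grad J_i], "constraints": [grad C_j]}
    constraint_vals : estimated expected constraint costs C_j(theta)
    limits          : constraint thresholds d_j (e.g. energy/buffer/QoS bounds)
    """
    # Primal ascent: weighted-sum of objective gradients (EC, EEL, PDR terms),
    # penalized by the constraint gradients scaled by the multipliers.
    g = sum(w * g_i for w, g_i in zip(weights, grads["objectives"]))
    g -= sum(l * g_c for l, g_c in zip(lam, grads["constraints"]))
    theta = theta + lr_theta * g

    # Dual ascent: raise a multiplier when its constraint is violated
    # (C_j > d_j), lower it otherwise, projected back to lam >= 0.
    lam = np.maximum(0.0,
                     lam + lr_lam * (np.asarray(constraint_vals)
                                     - np.asarray(limits)))
    return theta, lam
```

In this pattern, a multiplier grows only while its constraint is violated, which is what keeps the long-run constraint violation rate small without hand-tuned penalty weights.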
Results: Simulations in NS-3 show that, relative to Traditional Layered Protocols (TLP), CMORLM lowers EC by 34.2%, lowers EEL by 41.3%, and raises PDR by 16.5%. Network Lifetime (NL) increases by 38.4%. The Constraint Violation Rate (CVR) remains below 1%, roughly 23 times lower than that of an unconstrained variant of CMORLM. Ablation studies show that joint optimization reduces EC by a further 44.7% compared with single-layer control.
Conclusion: The proposed CMORLM scales across networks of 50 to 200 nodes and remains robust to traffic fluctuations, node failures, and mobile sinks. Pareto frontier analysis gives operators control over performance trade-offs through weight configuration.
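As one common way to realize the Pareto frontier analysis mentioned in the conclusion, candidate operating points (one per weight configuration) can be filtered by dominance. The sketch below is illustrative only; the (EC, EEL, PDR) tuples are made-up stand-ins, not results from the paper.

```python
def is_dominated(p, others):
    """p = (EC, EEL, PDR); lower EC and EEL, higher PDR are better.
    p is dominated if some other point is at least as good on all three."""
    return any(o[0] <= p[0] and o[1] <= p[1] and o[2] >= p[2] and o != p
               for o in others)

def pareto_front(points):
    """Keep only non-dominated (EC, EEL, PDR) operating points."""
    return [p for p in points if not is_dominated(p, points)]

# Hypothetical outcomes from four different weight configurations:
candidates = [(3.1, 40.0, 0.95),   # dominated by the third point
              (2.5, 55.0, 0.93),   # dominated by the third point
              (2.5, 38.0, 0.96),   # non-dominated
              (4.0, 60.0, 0.90)]   # dominated by the third point
front = pareto_front(candidates)
```

An operator would then pick a point on the front (and hence a weight vector) that matches the deployment's energy/latency/reliability priorities.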

Keywords

WSNs, reinforcement learning, multi-objective optimization, constrained Markov decision processes, cross-layer optimization, energy efficiency

Classification-JEL

C44, C61, L96, C63

Pages

197-229

How to Cite

Abdulsahib, G. M. A., & Awad Mohammed Ataelfadiel, M. (2026). Multi-Objective Constrained Reinforcement Learning for Joint Routing–MAC–Duty Cycling in Low-Power Wireless Sensor Networks. Advances in Decision Sciences, 30(2), 197-229.

https://doi.org/10.47654/v30y2026i2p197-229

ISSN 2090-3359 (Print)
ISSN 2090-3367 (Online)
