
Coping with network dynamics using reinforcement learning based network optimization in wireless sensor networks

Milos Rovcanin (UGent) , Eli De Poorter (UGent) , Ingrid Moerman (UGent) and Piet Demeester (UGent)
Abstract
Due to the constant increase in the density of wireless network devices, inter-network cooperation is increasingly important. To avoid conflicts, advanced algorithms for inter-network optimization utilize cognition processes such as reinforcement learning, enabling networks to solve complex optimization problems on their own with minimal outside intervention. This paper investigates the inherent trade-off that arises when using reinforcement learning techniques in dynamic networks: the network must be kept running optimally while, at the same time, different (suboptimal) network settings are continuously investigated to cope with changing network conditions. To handle these network dynamics, two existing algorithms, ε-greedy and Softmax, are compared to a novel approach based on a logarithmic probability distribution function. It is shown that, depending on the expected level of dynamics, the new algorithm outperforms the existing solutions.
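For context, the two baseline exploration strategies the abstract compares against are standard in the reinforcement learning literature. The sketch below is an illustration of those baselines only, not code from the paper; the function names and the list of Q-values per candidate network setting are assumptions, and the paper's novel logarithmic probability distribution is not reproduced here.

```python
import math
import random

def epsilon_greedy(q_values, epsilon=0.1):
    """With probability epsilon explore a random setting; otherwise exploit the best-known one."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda i: q_values[i])

def softmax(q_values, tau=1.0):
    """Sample a setting with probability proportional to exp(Q/tau) (Boltzmann exploration)."""
    m = max(q_values)  # shift by the max for numerical stability
    prefs = [math.exp((q - m) / tau) for q in q_values]
    total = sum(prefs)
    r = random.random() * total
    acc = 0.0
    for i, p in enumerate(prefs):
        acc += p
        if r < acc:
            return i
    return len(prefs) - 1  # fallback for floating-point edge cases
```

In both strategies a single parameter (epsilon, or the temperature tau) controls how often suboptimal settings are revisited, which is exactly the exploration/exploitation trade-off the paper studies under changing network conditions.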
Keywords
Self-awareness, Network cooperation, Reinforcement learning, Linear approximation, Network service negotiation, ε-greedy, Logarithmic state access distribution, IBCN

Downloads

  • (...).pdf: full text | UGent only | PDF | 3.16 MB
  • 5996 i.pdf: full text | open access | PDF | 1.04 MB

Citation

Please use this URL to cite or link to this publication:

MLA
Rovcanin, Milos et al. “Coping with Network Dynamics Using Reinforcement Learning Based Network Optimization in Wireless Sensor Networks.” WIRELESS PERSONAL COMMUNICATIONS 76.2 (2014): 169–191. Print.
APA
Rovcanin, M., De Poorter, E., Moerman, I., & Demeester, P. (2014). Coping with network dynamics using reinforcement learning based network optimization in wireless sensor networks. WIRELESS PERSONAL COMMUNICATIONS, 76(2), 169–191.
Chicago author-date
Rovcanin, Milos, Eli De Poorter, Ingrid Moerman, and Piet Demeester. 2014. “Coping with Network Dynamics Using Reinforcement Learning Based Network Optimization in Wireless Sensor Networks.” Wireless Personal Communications 76 (2): 169–191.
Vancouver
1. Rovcanin M, De Poorter E, Moerman I, Demeester P. Coping with network dynamics using reinforcement learning based network optimization in wireless sensor networks. WIRELESS PERSONAL COMMUNICATIONS. 2014;76(2):169–91.
IEEE
[1] M. Rovcanin, E. De Poorter, I. Moerman, and P. Demeester, “Coping with network dynamics using reinforcement learning based network optimization in wireless sensor networks,” WIRELESS PERSONAL COMMUNICATIONS, vol. 76, no. 2, pp. 169–191, 2014.
@article{5733227,
  abstract     = {Due to the constant increase in the density of wireless network devices, inter-network cooperation is increasingly important. To avoid conflicts, advanced algorithms for inter-network optimization utilize cognition processes such as reinforcement learning, enabling networks to solve complex optimization problems on their own with minimal outside intervention. This paper investigates the inherent trade-off that arises when using reinforcement learning techniques in dynamic networks: the network must be kept running optimally while, at the same time, different (suboptimal) network settings are continuously investigated to cope with changing network conditions. To handle these network dynamics, two existing algorithms, epsilon greedy and Softmax, are compared to a novel approach based on a logarithmic probability distribution function. It is shown that, depending on the expected level of dynamics, the new algorithm outperforms the existing solutions.},
  author       = {Rovcanin, Milos and De Poorter, Eli and Moerman, Ingrid and Demeester, Piet},
  issn         = {0929-6212},
  journal      = {WIRELESS PERSONAL COMMUNICATIONS},
  keywords     = {Self-awareness,Network cooperation,Reinforcement learning,Linear approximation,Network service negotiation,epsilon greedy,Logarithmic state access distribution,IBCN},
  language     = {eng},
  number       = {2},
  pages        = {169--191},
  title        = {Coping with network dynamics using reinforcement learning based network optimization in wireless sensor networks},
  url          = {http://dx.doi.org/10.1007/s11277-014-1684-4},
  volume       = {76},
  year         = {2014},
}
