IEEE International Symposium on Real-Time Distributed Computing

Time-efficient offloading for machine learning tasks between embedded systems and fog nodes



Abstract

The Internet of Things (IoT) and Machine Learning (ML) introduce embedded systems to many new roles and functions, but the current practice of using these technologies together can be improved. The status quo has embedded systems offload all of their ML functionality to an external device, which can lead to unpredictable throughput due to network instability. We propose to run low-complexity ML models on the embedded system itself and to distribute the workload only when the measured execution time exceeds a Worst-Case Execution Time (WCET) threshold.
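The offloading rule described in the abstract (run the low-complexity model locally and fall back to a fog node once measured execution time exceeds a WCET threshold) can be illustrated with a short sketch. The following C code is illustrative only and is not taken from the paper: the function names run_local_inference and offload_inference, the 50 ms threshold, and the sticky switch-over policy are all assumptions.

/* Illustrative sketch of a WCET-threshold offloading decision (not from the
 * paper): run the low-complexity model locally and switch to the fog node
 * once a measured execution time exceeds the WCET threshold. */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define WCET_THRESHOLD_US 50000ULL   /* assumed local WCET budget: 50 ms */

/* Stubs standing in for the real local model and the fog-node RPC. */
static int run_local_inference(const float *input, float *output)
{
    (void)input;
    output[0] = 0.0f;                /* placeholder result */
    return 0;
}

static int offload_inference(const float *input, float *output)
{
    (void)input;
    output[0] = 0.0f;                /* placeholder: would send the input to a fog node */
    return 0;
}

static uint64_t now_us(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000ULL + (uint64_t)ts.tv_nsec / 1000ULL;
}

int classify(const float *input, float *output)
{
    static int use_fog = 0;          /* sticky switch after a WCET violation (assumed policy) */

    if (use_fog)
        return offload_inference(input, output);

    uint64_t start = now_us();
    int rc = run_local_inference(input, output);

    /* Offload subsequent requests once the measured time exceeds the threshold. */
    if (now_us() - start > WCET_THRESHOLD_US)
        use_fog = 1;

    return rc;
}

int main(void)
{
    float input[4] = {0}, output[1];
    classify(input, output);
    printf("result = %f\n", (double)output[0]);
    return 0;
}

A real deployment would likely keep measuring per invocation and may probe the local path again later; the sticky switch above is only the simplest possible policy under these assumptions.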
