Dongguan University of Technology
AI Insights - The proposed scheme introduces a sparse codebook structure to achieve a favorable trade-off between reliability and complexity. (ML: 0.71)
- The paper proposes a low-complexity SSC scheme that reduces encoding and decoding complexity with minimal BLER performance degradation. (ML: 0.69)
- The proposed scheme may not be suitable for scenarios with high mobility or rapidly changing channels. (ML: 0.68)
- Key terms: BLER (block error rate), SSC (sparse superimposed coding), URLLC (ultra-reliable low-latency communications). The proposed low-complexity SSC scheme effectively balances reliability and complexity for short-packet transmission. (ML: 0.64)
- The paper assumes that channel state information (CSI) is available at the transmitter. (ML: 0.63)
- Simulation results demonstrate the robustness of the proposed approach across different transmission block lengths and show that it outperforms conventional SVC under appropriate sparsity configurations. (ML: 0.57)
- The scheme exhibits strong robustness across different transmission block lengths and outperforms conventional SVC under appropriate sparsity configurations. (ML: 0.50)
Abstract
Sparse superimposed coding (SSC) has emerged as a promising technique for short-packet transmission in ultra-reliable low-latency communication scenarios. However, conventional SSC schemes often suffer from high encoding and decoding complexity due to the use of dense codebook matrices. In this paper, we propose a low-complexity SSC scheme by designing a sparse codebook structure, where each codeword contains only a small number of non-zero elements. The decoding is performed using the traditional multipath matching pursuit algorithm, and the overall complexity is significantly reduced by exploiting the sparsity of the codebook. Simulation results show that the proposed scheme achieves a favorable trade-off between BLER performance and computational complexity, and exhibits strong robustness across different transmission block lengths.
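The decoder here is doing standard sparse recovery: the receiver observes a noisy superposition of a few codebook columns and greedily re-identifies them, and a sparse codebook makes the correlation step cheap, which is the source of the complexity savings the abstract describes. The Python sketch below is our own minimal illustration of that pipeline, using plain orthogonal matching pursuit as a stand-in for the paper's multipath matching pursuit; the dimensions, codebook construction, and noise level are invented for the example, not taken from the paper.

```python
# Minimal sketch of sparse-codebook superimposed coding with greedy
# matching-pursuit decoding (plain OMP stands in for the paper's
# multipath matching pursuit). All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, N, K, d = 64, 256, 4, 8  # block length, codebook size, superimposed codewords, nonzeros per column

# Sparse codebook: each column carries only d non-zero entries, so the
# correlation A.T @ r costs O(N*d) instead of O(N*n).
A = np.zeros((n, N))
for j in range(N):
    rows = rng.choice(n, d, replace=False)
    A[rows, j] = rng.choice([-1.0, 1.0], d) / np.sqrt(d)

support = rng.choice(N, K, replace=False)      # chosen indices encode the message
x = A[:, support].sum(axis=1)                  # superimposed codeword
y = x + 0.05 * rng.standard_normal(n)          # AWGN channel

# Greedy decoding: repeatedly pick the column most correlated with the
# residual, then re-fit the signal on the support chosen so far.
residual, picked = y.copy(), []
for _ in range(K):
    picked.append(int(np.argmax(np.abs(A.T @ residual))))
    S = A[:, picked]
    coef, *_ = np.linalg.lstsq(S, y, rcond=None)
    residual = y - S @ coef

print(sorted(picked) == sorted(map(int, support)))  # expect True on this easy instance
```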
Why are we recommending this paper?
Due to your interest in low latency
This paper directly addresses ultra-reliable low-latency communication, a key interest for the user. The focus on sparse superimposed coding tackles the complexity challenges inherent in achieving both high reliability and low latency.
Univ Grenoble Alpes
AI Insights - Pipeline Description Language (PDL): A language used to describe the design intent of a circuit, including signal declarations, connections, and operations between signals. (ML: 0.79)
- Pipeline Automation Framework (PAF): A software framework for designing high-performance digital circuits using a declarative, intent-based approach. (ML: 0.77)
- The PDL is used to describe the design intent of the circuit, including signal declarations, connections, and operations between signals. (ML: 0.74)
- The framework consists of three main components: the Pipeline Description Language (PDL), the Config API, and the Model Resolution module. (ML: 0.74)
- The Config API allows designers to parameterize the pipeline description to generate different circuits based on the same design intent. (ML: 0.73)
- PAF provides a set of tools and algorithms for resolving the synchronization model obtained after pipeline elaboration into a balanced graph without any relation left unspecified. (ML: 0.72)
- Config API: An application programming interface that allows designers to parameterize the pipeline description to generate different circuits based on the same design intent. (ML: 0.72)
- The Pipeline Automation Framework (PAF) is a software framework that enables designers to create high-performance digital circuits using a declarative, intent-based approach. (ML: 0.70)
- Model Resolution module: A component of PAF that uses algorithms to resolve the synchronization model into a balanced graph, taking into account parameters such as protocol signaling and extra signal propagation. (ML: 0.64)
- The Model Resolution module uses a set of algorithms to resolve the synchronization model into a balanced graph, taking into account parameters such as protocol signaling and extra signal propagation. (ML: 0.61)
Abstract
In a context of ever-growing worldwide communication traffic, cloud service providers aim to deploy scalable infrastructures that address heterogeneous needs. As part of the network infrastructure, FPGAs are tailored to guarantee low-latency, high-throughput packet processing. However, the slowness of the hardware design process impairs the ability of FPGAs to be part of an agile infrastructure under constant evolution, from incident response to long-term transformation. Deploying and maintaining network functionalities across a wide variety of FPGAs raises the need to fine-tune hardware designs for several FPGA targets. To address this issue, we introduce PAF (Pipeline Automation Framework), an open-source architectural parameterization framework based on a pipeline-oriented design methodology. PAF is implemented in Chisel, a Scala-embedded Hardware Construction Language (HCL), which we leverage to interface with circuit elaboration. Applied to industrial network packet classification systems, PAF demonstrates efficient parameterization, enabling the same pipelined design to be reused and optimized on several FPGAs. In addition, PAF focuses the pipeline description on the architectural intent, incidentally reducing the number of lines of code needed to express complex functionalities. Finally, PAF confirms that automation does not imply any loss of tight control over the architecture, achieving on-par performance and resource usage with equivalent exhaustively described implementations.
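To make the "balanced graph" idea behind the Model Resolution module concrete: when the operands of an operation are produced at different pipeline depths, delay registers must be inserted on the shorter paths so that all operands arrive in the same clock cycle. PAF itself is implemented in Chisel/Scala; the Python toy below re-expresses only that balancing computation on an invented dataflow graph, as a sketch of the principle rather than PAF's actual algorithm or API.

```python
# Toy pipeline balancing: compute how many delay registers each operand
# edge needs so that all operands of a node arrive in the same cycle.
# The graph, latencies, and names are invented for illustration.
from functools import lru_cache

# node -> (latency in cycles, operand nodes)
GRAPH = {
    "in_a": (0, []), "in_b": (0, []), "in_c": (0, []),
    "mul":  (3, ["in_a", "in_b"]),   # e.g. a 3-stage pipelined multiplier
    "add":  (1, ["mul", "in_c"]),    # in_c must be delayed to align with mul
    "out":  (0, ["add"]),
}

@lru_cache(maxsize=None)
def ready_cycle(node):
    """Cycle at which the node's result becomes available."""
    latency, operands = GRAPH[node]
    return latency + max((ready_cycle(op) for op in operands), default=0)

def balancing_registers(node):
    """Delay registers per operand edge so all operands align."""
    _, operands = GRAPH[node]
    if not operands:
        return {}
    latest = max(ready_cycle(op) for op in operands)
    return {op: latest - ready_cycle(op) for op in operands}

for node in GRAPH:
    regs = {e: r for e, r in balancing_registers(node).items() if r}
    if regs:
        print(f"{node}: insert delay registers {regs}")
# prints: add: insert delay registers {'in_c': 3}
```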
Why are we recommending this paper?
Due to your interest in high throughput
Given the user's interest in high throughput, this paper's exploration of FPGA-based pipeline automation is highly relevant. The work aligns with the need for efficient packet processing systems.
Max Planck Institute for Security and Privacy
AI Insights - The study highlights the need for education and awareness-raising efforts to help individuals understand the risks and consequences of cybercrime. (ML: 0.98)
- The study suggests that cybercrime can have long-lasting effects on individuals, with many experiencing lingering guilt, shame, and regret months after the incident. (ML: 0.97)
- Participants reported varying levels of self-blame and external blame, with some feeling responsible for their own victimization and others blaming external parties. (ML: 0.97)
- Participants reported varying levels of digital confidence and expertise, which influenced their ability to cope with cybercrime victimization. (ML: 0.97)
- Participants reported a range of losses resulting from cybercrimes, including financial, personal, and psychological harm. (ML: 0.97)
- Emotion-focused coping strategies, such as seeking understanding rather than blame, were used by participants to deal with negative emotions. (ML: 0.96)
- Social support from friends, family members, and online communities played a crucial role in helping victims cope with negative emotions and address cyber threats. (ML: 0.96)
- The study found that cybercrime can have a significant psychological toll on victims, with many experiencing negative emotions such as panic, stress, anxiety, fear, and anger. (ML: 0.96)
- Problem-focused coping strategies, such as seeking technical assistance and reporting incidents to authorities, were also employed by participants to mitigate harm. (ML: 0.96)
- The study highlights the importance of providing emotional support and resources to help individuals recover from cybercrime victimization. (ML: 0.95)
Abstract
How do individuals recover from cybercrimes? Victims experience various types of harm after cybercrimes, including monetary loss, data breaches, negative emotions, and even psychological trauma. The aspects that support their recovery process and contribute to individual cyber resilience remain underinvestigated. To address this gap, we interviewed 18 cybercrime victims from Western Europe using a trauma-informed approach. We identified four common stages following victimization: recognition, coping, processing, and recovery. Participants adopted various strategies to mitigate the impact of cybercrime and used different indicators to describe recovery. While they mostly relied on social support and self-regulation for emotional coping, service providers largely determined whether victims were able to recover their money. Internal factors, external support, and context sensitivity collectively contribute to individuals' cyber resilience. We recommend trauma-informed support for cybercrime victims. Extending our conceptualization of individual cyber resilience, we propose collaborative and context-sensitive strategies to address the harmful impacts of cybercrime.
Why are we recommending this paper?
Due to your interest in resilience
This paper investigates resilience, a core interest, specifically in the context of cybercrime. Understanding the factors contributing to individual resilience is crucial for robust system design.
Cornell University
AI Insights - The paper's core is a mathematical proof of theorems about queueing systems, specifically comparing two different methods for estimating the threshold in a queueing system. (ML: 0.89)
- The solution is a rigorous mathematical proof that requires a deep understanding of queueing theory and its applications. (ML: 0.83)
- The solution involves several lemmas and theorems that provide bounds on the difference between two functions, D(i, k, β) and H(i, k, β). (ML: 0.81)
- Key terms: queueing system, threshold estimation, Bellman equation. The paper's core is a complex mathematical proof that provides bounds on the difference between two functions in a queueing system. (ML: 0.80)
- The proof relies heavily on mathematical induction and the use of Bellman equations to derive recursive relationships between the functions. (ML: 0.61)
- The solution involves several lemmas and theorems that provide insights into the behavior of the functions. (ML: 0.61)
Abstract
We study a two-type server queueing system where flexible Type-I servers, upon their initial interaction with jobs, decide in real time whether to process them independently or in collaboration with dedicated Type-II servers. Independent processing begins immediately, as does collaborative service if a Type-II server is available. Otherwise, the job and its paired Type-I server wait in queue for collaboration. Type-I servers are non-preemptive and cannot engage with new jobs until their current job is completed.
We provide a complete characterization of the structural properties of the optimal policy for the clearing system. In particular, an optimal control is shown to follow a threshold structure based on the number of jobs in the queue awaiting a first interaction with a Type-I server and on the number of jobs in either independent or collaborative service.
We propose simple threshold heuristics, based on linear approximations, for real-time decision-making. In much of the parameter and state spaces, we establish theoretical bounds that compare the thresholds proposed by our heuristics to those of optimal policies and identify parameter configurations where these bounds are attained. Outside of these regions, the optimal thresholds are infinite. Numerical experiments further demonstrate the accuracy and robustness of our heuristics, particularly when the initial queue length is high. Our proposed heuristics achieve costs within 0.5% of the optimal policy on average and significantly outperform benchmark policies that exhibit extreme sensitivity to system parameters, sometimes incurring costs that exceed the optimal by more than 100%.
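To give a concrete feel for such threshold policies, the sketch below simulates a drastically simplified clearing system: a Type-I server that picks up a job collaborates immediately if a Type-II server is free, waits in a collaboration queue while fewer than tau pairs are already waiting, and otherwise serves the job independently at a slower rate. The model, rates, and holding-cost structure are our own toy choices, not the authors' formulation or their linear-approximation heuristic; it only illustrates how a scalar threshold trades waiting for collaboration against slower independent service.

```python
# Toy event-driven simulation of a two-type clearing system under a
# threshold rule: wait for collaboration only while fewer than tau
# Type-I/job pairs are queued for a Type-II server. All parameters are
# illustrative, not taken from the paper.
import heapq, random

def clearing_cost(n_jobs=30, c1=4, c2=2, mu_ind=0.6, mu_col=1.5,
                  tau=2, h=1.0, seed=1):
    rng = random.Random(seed)
    events = []                    # min-heap of (completion time, kind)
    queue, wait_pairs = n_jobs, 0  # untaken jobs; pairs awaiting a Type-II
    free1, free2 = c1, c2
    remaining, cost, t = n_jobs, 0.0, 0.0

    def dispatch():
        nonlocal queue, wait_pairs, free1, free2
        while free2 and wait_pairs:          # freed Type-II joins a waiting pair
            free2 -= 1; wait_pairs -= 1
            heapq.heappush(events, (t + rng.expovariate(mu_col), "col"))
        while free1 and queue:               # free Type-I takes the next job
            free1 -= 1; queue -= 1
            if free2:                        # collaboration starts at once
                free2 -= 1
                heapq.heappush(events, (t + rng.expovariate(mu_col), "col"))
            elif wait_pairs < tau:           # short queue: wait for a Type-II
                wait_pairs += 1
            else:                            # threshold hit: go independent
                heapq.heappush(events, (t + rng.expovariate(mu_ind), "ind"))

    dispatch()
    while remaining:
        t_next, kind = heapq.heappop(events)
        cost += h * remaining * (t_next - t)  # holding cost since last event
        t, remaining, free1 = t_next, remaining - 1, free1 + 1
        if kind == "col":                     # collaboration frees both servers
            free2 += 1
        dispatch()
    return cost

for tau in range(5):                          # sweep the threshold
    print(tau, round(clearing_cost(tau=tau), 1))
```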
Why are we recommending this paper?
Due to your interest in distributed systems
The paper's focus on service queues and balancing independent and collaborative processing directly addresses the need for efficient resource utilization, a key aspect of high-throughput distributed systems.
Royal Holloway University of London
AI Insights - The goal is to find a mechanism that minimizes the social cost while ensuring truthfulness. (ML: 0.95)
- It chooses the representatives of the groups according to the ordering of the agents therein. (ML: 0.94)
- A counterexample is provided to show that the algorithm is not truthful. (ML: 0.92)
- It partitions the agents into four sets based on their positions relative to the median representative, $w_m$. (ML: 0.92)
- The example shows that an agent can decrease its cost by misreporting its position. (ML: 0.91)
- The problem of designing a mechanism for facility location in the presence of misreporting agents is considered. (ML: 0.87)
- The approximation ratio of $A_{\ell m}$ is analyzed, and it is shown that the algorithm achieves an upper bound of 7/2 on the approximation ratio in the max variant for an odd number $m$ of groups. (ML: 0.80)
- The analysis of $A_{\ell m}$ follows along the lines of that in Section 3, and it considers instances with symmetric groups. (ML: 0.76)
- In Case 2, where $\mathrm{med} \in [w_\ell, w_m)$, it is shown that $SC(\mathbf{w}) \leq \sum_{i \in S_\ell} d(i, \mathrm{med}) + (n_{\ell m} + n_{mr} + n_r)\, d(\mathrm{med}, w_\ell) + (n_\ell + n_{\ell m})\, d(w_\ell, w_m)$. (ML: 0.75)
- The social cost of the solution output by $A_{\ell m}$ is compared to the optimal social cost, and an upper bound on the approximation ratio is derived for each case depending on the position of $\mathrm{med}$ with respect to $w_\ell$, $w_m$, and $w_r$. (ML: 0.75)
- In Case 3, where $\mathrm{med} \in [w_m, w_r]$, it is shown that $SC(\mathbf{w}) \leq \sum_{i \in S_\ell} d(i, \mathrm{med}) + (n_{\ell m} + n_{mr} + n_r)\, d(\mathrm{med}, w_\ell) + (n_\ell + n_{\ell m})\, d(w_\ell, w_m)$. (ML: 0.74)
- A new algorithm, called $A_{\ell m}$, is proposed for this problem. (ML: 0.74)
- The approximation ratio of $A_{\ell m}$ is analyzed for each case, and it is shown that the algorithm achieves an upper bound of 7/2 on the approximation ratio in all cases. (ML: 0.73)
- In Case 1, where $\mathrm{med} < w_\ell$, it is shown that $SC(\mathbf{w}) \leq \sum_{i \in S_\ell} d(i, \mathrm{med}) + n_\ell\, d(\mathrm{med}, w_\ell) + (n_\ell + n_{\ell m})\, d(w_\ell, w_m) + \sum_{i \in S_{mr} \cup S_r} \mathrm{cost}_i(\mathbf{w})$. (ML: 0.72)
Abstract
We study a distributed facility location problem in which a set of agents, each with a private position on the real line, is partitioned into a collection of fixed, disjoint groups. The goal is to open $k$ facilities at locations chosen from the set of positions reported by the agents. This decision is made by mechanisms that operate in two phases. In Phase 1, each group selects the position of one of its agents to serve as the group's representative location. In Phase 2, $k$ representatives are chosen as facility locations. Once the facility locations are determined, each agent incurs an individual cost, defined either as the sum of its distances to all facilities (sum-variant) or as the distance to its farthest facility (max-variant). We focus on the class of strategyproof mechanisms, which preclude the agents from benefiting through strategic misreporting, and establish tight bounds on the approximation ratio with respect to the social cost (the total individual agent cost) in both variants.
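As a concrete instance of the two-phase setting, for $k = 1$ and the sum-variant the sketch below composes medians: each group reports the position of its median agent in Phase 1, and Phase 2 opens the facility at the median of the representatives. Median composition is a standard strategyproof building block in distributed facility location, though not necessarily the mechanism attaining the paper's tight bounds; the groups and positions are invented for illustration.

```python
# Two-phase median composition for k = 1 facilities on the line.
# Groups and positions are invented for illustration.
from statistics import median_low

groups = [[0.1, 0.4, 0.9], [2.0, 2.2], [5.0, 5.5, 6.0, 7.0]]

# Phase 1: each group selects the position of one of its agents
# (here its lower median, which is always an actual agent position).
reps = [median_low(g) for g in groups]

# Phase 2: choose k = 1 representative as the facility location.
facility = median_low(reps)

# Sum-variant social cost: total distance of all agents to the facility.
social_cost = sum(abs(x - facility) for g in groups for x in g)
print(reps, facility, round(social_cost, 2))  # [0.4, 2.0, 5.5] 2.0 20.3
```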
Why are we recommending this paper?
Due to your interest in distributed systems
This research explores distributed systems with agents, aligning with the user's interest in resilient and scalable network architectures. The facility location problem provides a framework for understanding complex system design.
East China University of Science and Technology
AI Insights - Entropy: A measure of the amount of uncertainty or randomness in a system. (ML: 0.91)
- Resilience: The ability of a system to withstand and recover from disruptions or disturbances. (ML: 0.90)
- Entropy-based resilience theory provides a tractable analytical tool for studying trade-offs between performance efficiency and structural redundancy in networked systems. (ML: 0.88)
- The framework can be extended by incorporating capacity constraints, cost functions, stochastic demand, and dynamic adaptation mechanisms to further bridge entropy-based resilience theory and practical network optimization problems. (ML: 0.85)
- Networked systems: Complex systems composed of interconnected components that interact with each other. (ML: 0.83)
- The entropy-based framework offers quantitative guidance for designing large-scale networked systems under robustness constraints, which is central to supply chain management, transportation planning, and infrastructure design. (ML: 0.82)
- Entropy-based resilience theory provides a powerful tool for analyzing and designing complex networked systems under robustness constraints. (ML: 0.81)
- Supply chain management: The process of planning, coordinating, and controlling the flow of goods, services, and information from raw materials to end customers. (ML: 0.79)
- The dominant links scale inversely with network size, while background links exhibit quadratic decay with logarithmic corrections. (ML: 0.77)
- High-throughput backbone connections coexist with sparse redundancy channels that collectively enhance system resilience. (ML: 0.69)
Abstract
This study investigates the mathematical existence and asymptotic properties of Ulanowicz's structural resilience in complex systems such as supply chain networks. While ecological evidence suggests that sustainable systems gravitate toward an optimal state at $\alpha = 1/\mathrm{e}$, the universality of this configuration in generalized networks remains theoretically unverified. We prove that while optimal resilience is unattainable in two-node networks due to structural over-determinacy, it exists for any directed graph with $N_\mathcal{V} \geq 3$. By constructing a symmetric network model with three types of link weights $(x, y, z)$ and uniform marginal distributions, we derive the governing equations for the optimal resilience configuration. Our analytical and numerical results reveal that as the network size $N_\mathcal{V}$ increases, the link weights required to maintain optimal resilience exhibit a power-law scaling behavior: the adjacent links scale as $O(N_\mathcal{V}^{-1})$, while the non-adjacent links scale as $O(N_\mathcal{V}^{-2})$, both accompanied by specific logarithmic corrections. This work establishes a rigorous mathematical foundation for the optimal resilience framework and provides a unified perspective on how entropy-based principles govern the robustness and evolution of large-scale complex networks, which may offer quantitative guidance for designing large-scale networked systems under robustness constraints.
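The quantities behind this abstract can be evaluated directly from a weighted flow matrix: Ulanowicz's relative ascendency is $\alpha = A/C$, the average mutual information of the flow distribution divided by its Shannon capacity, and the associated robustness measure $-\alpha \ln \alpha$ (sometimes scaled by a factor of $\mathrm{e}$) peaks exactly at $\alpha = 1/\mathrm{e}$. The sketch below computes both for an invented three-node flow network.

```python
# Relative ascendency alpha = A / C and robustness -alpha*ln(alpha) for a
# small weighted directed flow network. The flow matrix is invented.
import numpy as np

T = np.array([[0.0, 4.0, 1.0],   # T[i, j] = flow from node i to node j
              [1.0, 0.0, 4.0],
              [4.0, 1.0, 0.0]])

P = T / T.sum()                               # joint flow distribution
p_out, p_in = P.sum(axis=1), P.sum(axis=0)    # marginal distributions
mask = P > 0

A = np.sum(P[mask] * np.log(P[mask] / np.outer(p_out, p_in)[mask]))  # ascendency (mutual information)
C = -np.sum(P[mask] * np.log(P[mask]))                               # development capacity (entropy)
alpha = A / C
print(f"alpha = {alpha:.3f}, robustness = {-alpha * np.log(alpha):.3f}")
print(f"optimal alpha = 1/e = {1 / np.e:.3f}")
```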
Why are we recommending this paper?
Due to your interest in resilience