Just Accepted
Available online, doi: 10.11884/HPLPB202638.250362
Abstract:
Background Purpose Methods Results Conclusions
The rapid development of high-power microwave application technology presents significant challenges for the reliability and installability of pulsed power drivers.
This paper introduces the design methodology of a compact, lightweight Tesla-type pulsed power driver based on the high-energy-density liquid dielectric Midel 7131 and a dual-width pulse-forming line (PFL).
A key breakthrough was achieved in the miniaturization of the integrated Tesla transformer and PFL assembly. By optimizing the electrical length and impedance-matching characteristics of the short-pulse transmission line, longstanding challenges associated with conventional single-cylinder PFLs and extended transmission lines using transformer-oil dielectrics were effectively resolved. A high-elevation, high-vacuum oil-impregnation technique was developed for the Tesla transformer, successfully mitigating partial discharge in the oil-paper insulation system and thereby enhancing the power rating and operational reliability of the PFL.
The developed pulsed power driver delivers a peak output power of 20 GW, a pulse duration of 50 ns, a pulse flat-top fluctuation of less than 2%, and a maximum repetition rate of 50 Hz. The system has demonstrated stable operation over continuous one-minute durations, accumulating approximately 200 000 pulses with consistent performance. The driver's overall dimensions are 4.0 m (L) × 1.5 m (W) × 1.5 m (H), with a total mass of approximately 5 metric tons.
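As a quick consistency check on the figures above, the per-pulse energy and burst-average power follow directly from the quoted peak power, pulse width, and repetition rate, assuming an ideal rectangular 50 ns pulse:

```python
# Consistency check on the driver's quoted output figures, assuming an ideal
# rectangular 50 ns pulse shape.
peak_power = 20e9    # W, peak output power
pulse_width = 50e-9  # s, pulse duration
rep_rate = 50.0      # Hz, maximum repetition rate

pulse_energy = peak_power * pulse_width  # J per pulse
avg_power = pulse_energy * rep_rate      # W, burst-average at 50 Hz

print(round(pulse_energy), round(avg_power))  # 1000 J per pulse, 50000 W average
```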
Compared with conventional 20 GW Tesla-type pulsed power generators, this driver achieves significant improvements in power density and miniaturization.
Available online, doi: 10.11884/HPLPB202638.250184
Abstract:
The output switch is an essential part of an electromagnetic pulse simulator, and the switch gap directly affects the waveform of the electric field the simulator generates. A single-polarity electromagnetic pulse simulator can adjust the switch gap with an external motor, but a bipolar electromagnetic pulse simulator cannot use this method because of constraints imposed by its mechanical structure and high-voltage insulation.
This study aims to investigate a gas-driven method to achieve precise regulation of the switch gap in a bipolar electromagnetic pulse simulator.
Firstly, the basic structure of the gas remote-adjustment system is proposed, which uses a cylinder as the actuator and connects to the outer cavity body through an air pipe. Secondly, based on this structure, a mathematical model of the switch gap adjustment system is established. Thirdly, to compensate for the slow response of gas actuation, a switch gap control method combining trajectory planning with PIDA control is proposed. Finally, the effectiveness of this method is verified through MATLAB simulation.
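The combination of trajectory planning and PIDA (proportional-integral-derivative-acceleration) control described above can be sketched as follows. The plant model (a double integrator), the cosine S-curve trajectory, and all gains are illustrative assumptions, not the parameters used in the paper:

```python
import math

# Sketch of trajectory planning + PIDA control of a switch-gap actuator,
# modeled as a double integrator. Plant, trajectory, and gains are assumed
# for illustration only.
dt, T = 0.01, 200.0      # time step (s), planned adjustment time (s)
target = 30.0            # desired switch gap (mm)

def planned(t):
    """Smooth cosine S-curve from 0 to target over T seconds."""
    return target if t >= T else target * 0.5 * (1.0 - math.cos(math.pi * t / T))

Kp, Ki, Kd, Ka = 4.0, 0.5, 3.0, 0.2   # PIDA gains (assumed)
x = v = integ = e_prev = de_prev = 0.0
max_err = 0.0

t = 0.0
while t < T + 10.0:
    e = planned(t) - x
    integ += e * dt
    de = (e - e_prev) / dt            # error rate
    dde = (de - de_prev) / dt         # error acceleration (the "A" in PIDA)
    u = Kp * e + Ki * integ + Kd * de + Ka * dde
    e_prev, de_prev = e, de
    v += u * dt                       # double-integrator plant
    x += v * dt
    max_err = max(max_err, abs(planned(t) - x))
    t += dt

print(round(max_err, 3), round(abs(target - x), 4))
```

With this slow, smooth reference the tracking error stays well inside the paper's 3.5 mm bound; the point is the structure (a planner feeding a PIDA loop), not the numerical values.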
Simulation of the whole regulation process shows that when the switch gap is moved from 0 mm to the desired 30 mm, the tracking error during the process is less than 3.5 mm and the final error is less than 0.5 mm.
This paper proposes a gas-driven switch gap adjustment method, which achieves fast and accurate adjustment of the switch electrode gap: a single adjustment completes within 200 s with an adjustment error of less than 0.5 mm. This method is of great significance for the engineering construction of electromagnetic pulse simulators.
Available online, doi: 10.11884/HPLPB202638.250181
Abstract:
The surface flashover in SF6 under nanosecond pulses involves complex physical processes, and accurately predicting the surface flashover voltage of insulating media in such environments constitutes a critical challenge for the design of high-voltage pulsed power equipment and the evaluation of insulation reliability. Compared with traditional AC or DC voltages, the extremely short rise time and high amplitude of nanosecond pulses lead to significant space charge effects and distinct discharge development mechanisms, thereby posing severe challenges to prediction models based on classical theories. In recent years, with the rapid improvement of computer computing power and breakthroughs in artificial intelligence algorithms, data-driven machine learning methods have demonstrated great potential in solving complex nonlinear insulation problems.
Targeting this specific challenge under nanosecond pulses, this paper selects four algorithms, including support vector machine (SVM), multi-layer perceptron (MLP), random forest (RF), and extreme gradient boosting (XGBoost), to train and predict flashover voltage data under different experimental conditions within the multi-scale distance range of 15 mm to 500 mm.
First, external operating conditions such as electric field distribution, voltage waveform, and gas pressure were parametrically extracted and characterized. The Pearson correlation coefficient was employed for a correlation analysis of these characteristic parameters, and 22 feature quantities were ultimately screened out as model inputs. Subsequently, Bayesian hyperparameter optimization was applied to each of the four algorithms, with 10-fold cross-validation used to select the optimal hyperparameter combination for each. The training set was then fed to the four algorithms, and each trained model was validated on the test set.
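The 10-fold cross-validation selection loop at the heart of this procedure can be illustrated with a minimal stand-in. The sketch below tunes a ridge-regression penalty by grid search on synthetic 22-feature data; the paper instead uses Bayesian optimization over four model families on the flashover dataset, so every name and value here is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the flashover dataset: 200 samples, 22 features.
X = rng.normal(size=(200, 22))
w_true = rng.normal(size=22)
y = X @ w_true + rng.normal(scale=0.5, size=200)

def ridge_fit(Xtr, ytr, lam):
    """Closed-form ridge regression: (X^T X + lam I)^-1 X^T y."""
    d = Xtr.shape[1]
    return np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(d), Xtr.T @ ytr)

def cv_mse(lam, k=10):
    """Mean squared error averaged over k cross-validation folds."""
    idx = np.arange(len(X))
    errs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        w = ridge_fit(X[train], y[train], lam)
        errs.append(np.mean((X[fold] @ w - y[fold]) ** 2))
    return float(np.mean(errs))

grid = [1e-3, 1e-2, 1e-1, 1.0, 10.0]          # candidate hyperparameters
scores = {lam: cv_mse(lam) for lam in grid}
best = min(scores, key=scores.get)             # pick by CV score
print(best, round(scores[best], 4))
```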
The four algorithms demonstrated good overall performance. Among them, RF and XGBoost performed excellently on the training set but poorly on the validation set, which is likely a manifestation of overfitting in ensemble learning and indicates weak generalization ability. SVM achieved relatively outstanding performance on both the training set and the validation set. Furthermore, the generalization performance of SVM and XGBoost was validated using data outside the sample dataset, and SVM yielded the better predictions on these out-of-sample data.
SVM achieved high prediction accuracy on the training set, test set, and data outside the sample dataset, making it more suitable for the insulation design of electromagnetic pulse simulation devices.
Available online, doi: 10.11884/HPLPB202638.250248
Abstract:
Pulse step modulation (PSM) high-voltage power supply is widely used in the heating systems of the Experimental Advanced Superconducting Tokamak (EAST). This power supply adopts a modular topology, where the high output voltage is generated by superimposing the outputs of multiple independent DC power modules. In conventional designs, input over-voltage and under-voltage protection for each power module is achieved by installing individual voltage sensors across the input capacitors.
However, this method requires a large number of voltage sensors, which significantly increases system monitoring costs and complicates the hardware detection circuitry. To address these limitations, this study aims to develop a sensorless voltage measurement (SVM) method capable of estimating the input voltage of each power module using only a single voltage sensor on the output side.
This paper first introduces the circuit topology of the PSM high-voltage power supply and provides a detailed analysis of its control strategy. Building on this foundation, a novel sensorless voltage detection technique is proposed to estimate the input voltage of each power module. The method utilizes only one voltage sensor installed at the output side of the PSM high-voltage power supply to collect voltage signals, from which the input voltages of individual modules are derived through algorithmic processing.
To validate the proposed method, a model was constructed and tested based on the RT-LAB real-time simulation platform. Experimental results demonstrate that the SVM technique can effectively estimate input voltages, thereby confirming the feasibility of the proposed method.
The study concludes that the SVM method not only reduces the number of required sensors and associated costs but also simplifies the system architecture while maintaining reliable module-level voltage monitoring. The findings provide valuable insights for the design of modular power supplies in large-scale experimental setups and suggest potential applications in other multi-module power electronic systems.
Available online, doi: 10.11884/HPLPB202638.250204
Abstract:
The PFN-Marx pulse driver with millisecond charging holds significant potential for achieving lightweight and miniaturized systems. To ensure its long-life, stable, and reliable operation, the development of a triggered gas gap switch represents a key technological challenge.
This study aims to address issues related to the large dispersion in operating voltage and rapid erosion of the trigger electrode under millisecond charging conditions.
Based on the operating mechanism of the corona-stabilized switch, a corona-based gas-triggered switch was developed. Investigations were conducted on its structural design, electrostatic field simulation, trigger source development, operational voltage range, time delay, and jitter characteristics. These efforts resolved the problem of frequent self-breakdown or trigger failure under millisecond charging.
Experimental results demonstrate that, using SF6 as the working gas at a pressure of 0.6 MPa, the maximum operating voltage of the triggered switch reaches 90 kV. Under conditions of 84 kV operating voltage, 20 Hz repetition frequency, 500 pulses per burst, and without gas replacement, the switch was tested continuously for 100,000 pulses. Only one self-breakdown incident occurred during this period, resulting in a self-breakdown rate of less than 0.01‰.
The triggered switch developed in this study meets the design requirements and effectively resolves the instability issues under millisecond charging conditions, thereby providing a foundation for future engineering applications.
Available online, doi: 10.11884/HPLPB202638.250018
Abstract:
Field-programmable gate array (FPGA)-based time-to-digital converters (TDCs) have been extensively employed for high-precision time interval measurements, in which picosecond-level resolution is often required. Among existing approaches, the delay-line method remains widely used, while the system clock frequency and the delay chain design are recognized as the primary factors affecting resolution and linearity.
The objective of this study is to develop a multi-channel FPGA-TDC architecture that integrates multiphase clocking with delay-line interpolation, thereby lowering the operating frequency, improving linearity, and reducing hardware resource utilization, while maintaining high measurement resolution.
A two-stage interpolation scheme was introduced, where fine time measurement cells were implemented through the combination of multiphase clocks and shortened delay chains. This configuration mitigates the accumulation of nonlinearity in the delay elements and reduces the scale of thermometer-to-binary encoders, resulting in decreased logic overhead. The proposed TDC was implemented on a Xilinx ZYNQ-7035 device, and its performance was evaluated within a measurement range of 0–16000 ps.
The experimental evaluation demonstrated a time resolution better than 4 ps. The measured differential nonlinearity (DNL) ranged from −1 least significant bit (LSB) to +7 LSB, while the integral nonlinearity (INL) ranged from −2 LSB to +14 LSB. Compared with conventional architectures, the proposed scheme shortens the delay chain severalfold at the same operating frequency, or achieves a lower operating frequency with the same chain length.
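The way multiphase clocking shortens the required delay chain can be seen from the bin-width arithmetic. The abstract does not state the clock settings, so the 250 MHz clock, 8 phases, and 128 taps below are purely illustrative assumptions:

```python
# Bin-width arithmetic for a two-stage (multiphase clock + delay chain) TDC.
# Clock frequency, phase count, and tap count are assumed for illustration;
# the abstract does not specify them.
f_clk = 250e6          # system clock frequency (Hz), assumed
n_phase = 8            # number of multiphase clock phases, assumed
taps = 128             # delay-chain taps within one phase slot, assumed

period_ps = 1e12 / f_clk            # clock period: 4000 ps
phase_bin_ps = period_ps / n_phase  # coarse bin from multiphase clock: 500 ps
fine_bin_ps = phase_bin_ps / taps   # fine bin from the shortened chain

# A single-stage delay-line TDC at the same resolution would need
# n_phase * taps = 1024 taps spanning the full clock period.
print(phase_bin_ps, round(fine_bin_ps, 2))  # 500.0 3.91
```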
The proposed two-stage interpolation architecture not only enhances resolution and linearity but also significantly reduces logic resource consumption, demonstrating strong application potential.
Available online, doi: 10.11884/HPLPB202638.250123
Abstract:
Owing to its unique miniaturized structure, real-time frequency tuning capability, and broad-spectrum microwave output characteristics, the gyromagnetic nonlinear transmission line (GNLTL) exhibits considerable application potential in the development of small-scale solid-state high-power microwave sources. This has driven the need for in-depth exploration of its circuit characteristics and parameter influences to optimize its performance.
This study aims to derive the analytical expression of solitons in the GNLTL equivalent circuit, construct a reliable equivalent circuit model of GNLTL, and systematically clarify the influence mechanism of key circuit parameters on its output characteristics.
Firstly, the analytical expression of solitons in the GNLTL equivalent circuit was obtained through theoretical deduction. Secondly, an equivalent circuit model of GNLTL was established using circuit simulation methods. Finally, the influence mechanism of key circuit parameters on the output characteristics of GNLTL was systematically investigated based on the constructed model.
The results show that the saturation current and initial inductance of the nonlinear inductor have a decisive effect on the nonlinear characteristics of the circuit: when these two parameters are small, the leading edge of the output pulse is not fully steepened and is accompanied by oscillating waveforms; increasing them improves the steepening degree of the pulse leading edge, indicating a positive correlation between these two parameters and circuit nonlinearity. Additionally, enhanced nonlinearity of the equivalent circuit leads to a decrease in output frequency; saturation current, saturation inductance, initial inductance, and capacitance per stage all show a negative correlation with the output microwave frequency.
The findings of this study clarify the relationship between key circuit parameters and the nonlinear characteristics as well as output frequency of GNLTL, thereby providing theoretical and simulation references for the design and performance analysis of gyromagnetic nonlinear transmission lines.
Available online, doi: 10.11884/HPLPB202638.250155
Abstract:
Currently, the bias power supplies in high-voltage electron beam welders, both domestically and internationally, are suspended at a negative high voltage. The output voltage regulation is achieved by sampling the operating current in the high-voltage power circuit. The sampled current signal undergoes multi-stage conversion before being sent to the bias power supply, which then adjusts its output voltage based on the feedback current. This adjusted output voltage, in turn, alters the current in the high-voltage circuit. Since the bias power supply is an inverter-based power source, its response and adjustment cycles are relatively long, and precise step-wise regulation is challenging. Consequently, this leads to significant beam current ripple, poor stability, and inadequate beam current reproducibility, failing to meet the requirements of precision welding for beam current stability and low fluctuation.
This paper aims to develop a bias power supply with an adjustable DC output voltage ranging from −100 V to −2 kV, featuring low voltage ripple and high voltage stability. The bias power supply can be connected in series within the high-voltage circuit, enabling rapid adjustment and precise control of the operating beam current through a fast closed-loop feedback control system. Additionally, the bias power supply must operate reliably during load arcing of the electron gun.
The design incorporates absorption and protection methods to address the issue of electron gun load arcing damaging the bias power supply. By connecting the bias power supply in series within the high-voltage circuit and feeding back the operating current in the bias power supply loop, the output voltage (bias cup voltage) is adjusted. The bias cup voltage adaptively regulates according to the beam current magnitude, achieving real-time rapid tracking and fine control of the operating beam current.
A bias power supply was developed with an adjustable DC output voltage from −100 V to −2 kV, featuring a ripple voltage of ≤0.1% across the entire voltage range, voltage stability better than 0.1%, and an output current greater than 3 mA. When applied to a −150 kV/33 mA high-voltage electron beam welder, it achieved a beam current ripple of ±0.19%, beam current stability better than ±5 μA, and beam current reproducibility of ±0.04%.
Based on the methods of absorption, protection, and adaptive regulation of the bias cup voltage according to the beam current magnitude, a novel bias power supply for high-voltage electron beam welders has been successfully developed. This solution addresses the issues of large beam current ripple, poor stability, and inadequate reproducibility in high-voltage electron beam welding, providing an effective approach for high-stability, precision-controllable welding.
Available online, doi: 10.11884/HPLPB202638.250049
Abstract:
The motion and trapping of high-energy charged particles in the radiation belts are significantly influenced by the structure of Earth's magnetic field. Utilizing different geomagnetic models in simulations can lead to varying understandings of particle loss mechanisms in artificial radiation belts.
This study aims to simulate and compare the trajectories and loss processes of 10 MeV electrons injected at different longitudes and L-values under the centered dipole, eccentric dipole, and International Geomagnetic Reference Field (IGRF) models, to elucidate the influence of geomagnetic field models on particle trapping and loss, particularly within the South Atlantic Anomaly (SAA) region.
The particle loss processes during injection were simulated using the MAGNETOCOSMIC program within the Geant4 Monte Carlo software. Simulations were conducted for 10 MeV electrons at various longitudes and L-values. The trajectories, loss cone angles, and trapping conditions were analyzed and compared among the three geomagnetic models.
The centered dipole model yielded relatively regular and symmetric electron drift trajectories, whereas noticeable asymmetry was observed with the eccentric dipole model. The IGRF model produced the most complex and irregular trajectories, best reflecting the actual variability of Earth's magnetic field. Regarding the relationship between loss cone angle and L-value, the IGRF model exhibited the largest loss cone angles, indicating the most stringent conditions for particle trapping. Furthermore, injection longitude significantly influenced loss processes, with electrons approaching the center of the SAA being most susceptible to drift loss.
The choice of geomagnetic model critically impacts the simulation of particle dynamics in artificial radiation belts. The IGRF model, offering the most detailed field representation, predicts the strictest trapping conditions and most realistic loss patterns, especially within the SAA. These findings enhance the understanding of particle trapping mechanisms and are significant for space environment research and applications.
Available online, doi: 10.11884/HPLPB202638.250067
Abstract:
Neutron nuclear data are crucial for fundamental research in nuclear physics, providing essential information for nuclear science and engineering applications. Advanced high-current accelerator neutron sources serve as the foundation for nuclear data measurements. The neutron converter target is a key component of such high-current accelerator neutron sources. Under intense particle beam bombardment, the heat dissipation of the neutron converter target is a critical factor limiting the neutron yield and operational stability.
This study aims to address the insufficient heat dissipation capacity of traditional gas targets by designing a novel dynamic gas target system. By optimizing the structure of the gas target chamber to form an active cooling circulation loop, it seeks to solve the cooling problem within the confined space of the gas target chamber.
First, a conceptual design of the gas target system and chamber structure was conducted. The Target software was then used to analyze the energy straggling of incident ions caused by the metal window and the gas itself. Numerical simulations of the thermal environment inside the gas target chamber were performed. The heat source was dynamically loaded based on gas density by coupling with SRIM calculations of the heating power. The gas flow patterns within the target chamber under different beam currents and inlet velocities were analyzed.
The energy straggling calculations show that the contribution from the gas is very small, with the metal window being the primary source of energy straggling for incident ions. The simulation results indicate that as the beam current increases, the heating power rises gradually, while the density in the heated region decreases rapidly. Increasing the inlet flow velocity enhances the heat dissipation capacity and reduces the density drop effect caused by beam heating.
The comprehensive performance evaluation demonstrates that this dynamic gas target system can achieve a neutron yield of up to 5.2×10¹² n/s at a beam current of 10 mA. The results prove that the novel dynamic gas target system effectively improves heat dissipation performance, contributes to obtaining a higher neutron yield, and ensures operational stability under high-current application scenarios.
Available online, doi: 10.11884/HPLPB202638.250112
Abstract:
Envelope instabilities and halo formation are critical challenges limiting beam quality in space-charge-dominated beams of low-energy superconducting proton linear accelerators. The dynamic evolution of focusing parameters during acceleration and the intrinsic role of double-period focusing structures in the low-energy region in these phenomena remain insufficiently explored.
This study aims to systematically investigate the influence of dynamically evolving focusing parameters on envelope instabilities, reveal the relationship between double-period focusing structures and halo formation, and achieve localized breakthroughs of the zero-current phase advance σ0 beyond 90° while optimizing beam quality.
A theoretical model was established via the second-order even-mode expansion of the Vlasov–Poisson equations. Multiple evolution schemes were designed, and multi-particle simulations were performed on low-energy proton beams (normalized RMS emittance: 0.2–0.4 π·mm·mrad). The particle–core model was used to compare halo formation mechanisms between quasi-periodic and double-period structures, with two-dimensional and three-dimensional models verifying key findings.
For weak space-charge effects (high η), σ0 can exceed 90° without degrading beam quality; strong space-charge effects (low η) induce resonances and emittance growth, especially in doublet structures. Double-period structures cause envelope instability even with σ0 < 90° per cell, being more prone to halo formation via the 2∶1 resonance. Longitudinal beam size variations alter core charge density (a new halo mechanism), and higher-order resonances contribute significantly. The number of short-period cells (N) correlates inversely with resonance probability.
Dynamic focusing parameters and double-period structures strongly affect envelope instabilities and halo formation. The 2∶1 resonance and longitudinal-transverse coupling are key halo mechanisms. σ0 breakthrough beyond 90° is feasible under weak space-charge conditions, and increasing N reduces resonance risk. These findings provide theoretical and numerical support for beam quality optimization in low-energy superconducting proton linacs.
Available online, doi: 10.11884/HPLPB202537.250038
Abstract:
To enhance the performance of the next-generation X-ray free electron laser (XFEL), a photocathode RF gun capable of providing the required high-quality electron beam with a small emittance has been a significant research objective. In comparison to the conventional L-band or S-band RF gun, the C-band RF gun features a higher acceleration gradient above 150 MV/m and the ability to generate a small-emittance beam. Low-emittance electron beams are critical for enhancing XFEL coherence and brightness, driving demand for advanced RF gun designs. For a bunch charge of 100 pC, a normalized emittance of less than 0.2 mm·mrad has been expected at the gun exit.
This paper presents the design of an emittance measurement device, which can accurately measure such a small emittance at the C-band RF gun exit to ensure beam quality for XFEL applications.
To achieve the desired accuracy, the primary parameters (slit width, slit thickness, and beamlet-drift length) have been systematically optimized through numerical simulations using Astra and Python, based on the single-slit-scan method. Dynamic errors, including motor displacement and imaging resolution, were quantified to ensure measurement reliability.
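The single-slit-scan reconstruction referenced above can be sketched as follows: second moments of (x, x′) are accumulated slit by slit, and the geometric RMS emittance follows from ε = √(⟨x²⟩⟨x′²⟩ − ⟨xx′⟩²). The synthetic correlated Gaussian beam and all parameter values are illustrative assumptions, not the Astra setup of the paper:

```python
import numpy as np

# Sketch of RMS emittance reconstruction from a single-slit scan on a
# synthetic correlated Gaussian beam (all values illustrative).
rng = np.random.default_rng(1)
n = 200_000
sig_x, sig_xp, r = 0.1e-3, 0.05e-3, 0.3            # m, rad, x-x' correlation
cov = [[sig_x**2, r * sig_x * sig_xp],
       [r * sig_x * sig_xp, sig_xp**2]]
x, xp = rng.multivariate_normal([0.0, 0.0], cov, n).T

# Step a slit of width w across x; each transmitted beamlet contributes its
# count, position, and divergence statistics to the second moments.
w = 5e-6
positions = np.linspace(-4 * sig_x, 4 * sig_x, 161)  # spacing equals w
m_x2 = m_xp2 = m_xxp = wsum = 0.0
for xs in positions:
    sel = np.abs(x - xs) < w / 2
    k = int(sel.sum())
    if k == 0:
        continue
    wsum += k
    m_x2 += k * xs**2
    m_xp2 += float((xp[sel] ** 2).sum())
    m_xxp += k * xs * float(xp[sel].mean())
m_x2, m_xp2, m_xxp = m_x2 / wsum, m_xp2 / wsum, m_xxp / wsum

# Geometric RMS emittance from the reconstructed second moments.
eps = np.sqrt(m_x2 * m_xp2 - m_xxp**2)
true_eps = np.sqrt(sig_x**2 * sig_xp**2 * (1 - r**2))
print(eps, true_eps)  # agree to within a few percent
```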
The evaluations indicate that the measurement error of 95% emittance is less than 5%, employing a slit width of 5 μm, a slit thickness of 1 mm, and a beamlet-drift length of 0.11 m under dynamic conditions.
This optimized emittance measurement device supports precise beam quality characterization for XFELs, offering potential for further advancements in electron beam diagnostics.
Available online, doi: 10.11884/HPLPB202638.250252
Abstract:
High-power femtosecond fiber lasers are essential tools for advanced applications in ultrafast science, precision manufacturing, and nonlinear optics. However, achieving hundred-watt-level output while maintaining high beam quality and short pulse duration remains challenging due to nonlinear effects and transverse mode instabilities.
This work aims to develop a high-power femtosecond fiber laser system based on chirped-pulse amplification (CPA), using rod-type photonic crystal fiber as the gain medium, to achieve hundred-watt-level output with high efficiency and stable beam quality.
The system adopts a rod-type photonic crystal fiber as the main amplifier. Backward pumping combined with double-pass amplification in a single rod fiber is implemented to enhance pump-to-signal conversion efficiency. Nonlinear effects are mitigated by employing a large mode area fiber, short gain length, and proper chirped-pulse management. A double-grating compressor is used for final pulse compression.
The amplifier achieves a pump-to-signal conversion efficiency exceeding 60%. The system delivers pulses with a central wavelength of 1033 nm, a repetition rate of 1 MHz, a single-pulse energy of 162 μJ, and a pulse duration of 233 fs. The output beam ellipticity is better than 95%. The overall pump-to-compressed-signal efficiency reaches 54%.
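A quick consistency check on the reported figures, using a rectangular-pulse approximation for the peak power (the real compressed pulse shape will give a somewhat different peak):

```python
# Consistency check on the reported output figures.
E, f_rep, tau = 162e-6, 1e6, 233e-15  # pulse energy (J), rep rate (Hz), duration (s)

avg_power = E * f_rep   # ~162 W: the "hundred-watt-level" average power
peak_power = E / tau    # ~0.7 GW peak, rectangular-pulse approximation

print(round(avg_power, 1), round(peak_power / 1e9, 2))
```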
The demonstrated system achieves high repetition rate, high average power, and ultrashort pulse duration simultaneously, providing a novel and practical scheme for hundred-watt-level femtosecond fiber lasers. This approach offers new opportunities for applications requiring stable, high-brightness ultrafast sources.
Available online, doi: 10.11884/HPLPB202638.250070
Abstract:
Optical manipulation based on integer-order vortex beams is widely used in nanotechnology, yet their discrete nature restricts continuous and precise transverse control of nanoparticles.
This study aims to overcome this limitation by proposing a novel approach using fractional-order vortex beams (FVBs), with the goal of achieving continuous and precise transverse optical trapping and manipulation of nanoparticles.
We developed a vector diffraction model to characterize the focal field of FVBs, revealing it as a coherent superposition of integer-order modes with a highly asymmetric weight distribution. Additionally, an optical force model was established to analyze the trapping behavior of spherical nanoparticles. Theoretical calculations and Langevin dynamics simulations were employed to evaluate the three-dimensional trapping stability and multi-degree-of-freedom manipulation capability.
The transverse trapping position exhibits a linear dependence on the fractional topological charge. By continuously tuning the topological charge, nanoparticles can be displaced precisely and continuously in the transverse plane with sub-wavelength accuracy—a capability not achievable with conventional integer-order vortex beams. Simulations further confirm the stability of the three-dimensional trap and the feasibility of coordinated multi-degree-of-freedom manipulation.
This work demonstrates that fractional-order vortex beams offer a superior alternative for high-precision optical manipulation. They provide a powerful and novel technique for applications in microfluidics, nanofabrication, and lab-on-a-chip devices.
Available online, doi: 10.11884/HPLPB202638.250178
Abstract:
Different applications require lasers of different wavelengths, and Raman lasers are one of the effective means of extending the spectral range of laser sources. Raman lasers offer high conversion efficiency, excellent beam quality, good scalability, and wide spectral coverage. However, the cumbersome size of the Raman cell (especially its long length) hinders the application of Raman lasers. Shortening the Raman cell requires a short-focus lens, which can lead to laser-induced breakdown (LIB).
To realize miniaturization of Raman laser devices while suppressing LIB, this work proposed a method to modulate the pump laser into a Bessel beam to achieve stimulated Raman frequency conversion using an axicon. The goal is to achieve high photon conversion efficiency (PCE) and beam quality in a compact system.
A comparison of the focal intensity and depth of focus between an f = 0.5 m lens and an axicon showed that an axicon with a base angle of 2° effectively reduces the laser intensity at the focus and increases the depth of focus. In this work, a pulsed 1064 nm laser was used as the pump source, pressurized methane as the Raman medium, and a 2° base-angle axicon to focus the pump laser. The methane pressure, pump laser divergence angle, and pump beam diameter were optimized to maximize the conversion efficiency.
With 3.5 MPa methane and 366 mJ of 1064 nm pump energy, 128 mJ of forward Raman output at 1543 nm was generated; the corresponding photon conversion efficiency was 50.7%, and higher output energy and conversion efficiency are expected at higher pressure and pump energy. When the rounded central apex of the axicon was blocked, a Raman pulse energy of 97 mJ was still retained, with a beam quality factor β = 2.19. An experiment verified that the Raman cell can be shortened to 0.4 m without damaging the window, and multiple experiments indicate that it can be further shortened to 0.3 m without sacrificing conversion efficiency. By axially translating the axicon within an extended cell, the forward/backward Stokes ratio became tunable.
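The quoted photon conversion efficiency can be reproduced from the reported energies and wavelengths, since photon number scales as Eλ/(hc):

```python
# Photon-conversion-efficiency check from the reported figures. Photon number
# N = E * lambda / (h c), so PCE = N_s / N_p = (E_s * lam_s) / (E_p * lam_p).
E_p, lam_p = 366e-3, 1064e-9   # pump energy (J) and wavelength (m)
E_s, lam_s = 128e-3, 1543e-9   # forward Stokes energy (J) and wavelength (m)

pce = (E_s * lam_s) / (E_p * lam_p)
print(round(100 * pce, 1))     # 50.7 (%), matching the reported value
```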
This study demonstrates the viability of Bessel beams for compact, high-efficiency gaseous Raman lasers. The conical-wavefront pumping strategy mitigates LIB risk and enables system miniaturization, offering a promising pathway for practical applications.
, Available online , doi: 10.11884/HPLPB202638.250171
Abstract:
Background Purpose Methods Results Conclusions
The use of high-power lasers for Wireless Power Transmission (WPT) in space-based solar power stations poses a potential risk to orbiting spacecraft. Misalignment or system failures could cause the laser beam to irradiate a spacecraft's solar array, potentially inducing discharge phenomena that threaten the spacecraft's safety. Existing research has primarily focused on the thermal damage effects of lasers on solar arrays, while studies on the characteristics of laser-induced discharge remain insufficient.
This study aims to systematically investigate the influence of two key laser parameters, energy and wavelength, on the discharge characteristics of spacecraft solar arrays. The goal is to reveal the underlying mechanisms of laser-induced discharge, thereby providing a theoretical and experimental basis for the safe application of high-power laser wireless energy transmission technology.
The mechanism of laser-induced solar array discharge was analyzed based on laser-induced plasma theory and discharge mechanisms within the Low Earth Orbit (LEO) plasma environment. Guided by this theoretical framework, the experimental parameters for the laser-induced spacecraft solar array discharge test were determined. The experiment analyzed the probability of discharge induced by a 532 nm laser at different energy levels and acquired discharge duration data. Probability-time distribution curves were established, and the probability functions for discharge duration under different laser energies were obtained by fitting with a double Poisson distribution. Furthermore, a comparative study was conducted on the peak discharge current and the duration probability functions induced by 532 nm and 266 nm wavelength lasers at the same energy level.
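The paper's exact fitting procedure for the double Poisson distribution is not given in the abstract; as a minimal illustration of the idea, the sketch below fits a two-component Poisson mixture to a (synthetic) discharge-duration probability histogram by least-squares grid search. All parameter values are illustrative:

```python
import math
from itertools import product

def poisson_pmf(k: int, lam: float) -> float:
    return math.exp(-lam) * lam ** k / math.factorial(k)

def double_poisson(k: int, w: float, lam1: float, lam2: float) -> float:
    """Two-component Poisson mixture: w*P(k; lam1) + (1-w)*P(k; lam2)."""
    return w * poisson_pmf(k, lam1) + (1.0 - w) * poisson_pmf(k, lam2)

def fit_double_poisson(probs, ks, w_grid, lam_grid):
    """Coarse grid-search least-squares fit of (w, lam1, lam2)."""
    best, best_err = None, float("inf")
    for w, l1, l2 in product(w_grid, lam_grid, lam_grid):
        if l1 >= l2:  # enforce ordering to avoid duplicate solutions
            continue
        err = sum((double_poisson(k, w, l1, l2) - p) ** 2
                  for k, p in zip(ks, probs))
        if err < best_err:
            best, best_err = (w, l1, l2), err
    return best

# Synthetic "duration histogram" generated from known parameters.
ks = list(range(15))
true_params = (0.6, 2.0, 7.0)
probs = [double_poisson(k, *true_params) for k in ks]

lam_grid = [x / 2 for x in range(1, 21)]   # candidate lambdas 0.5 .. 10.0
w_grid = [x / 10 for x in range(1, 10)]    # candidate weights 0.1 .. 0.9
best = fit_double_poisson(probs, ks, w_grid, lam_grid)
print(best)
```

In practice one would use a proper nonlinear least-squares routine rather than a grid search; the grid version just keeps the sketch dependency-free.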
The experimental results demonstrate that higher laser energy leads to a greater probability of induced discharge and longer discharge durations. Shorter laser wavelengths result in a lower discharge threshold and induce discharge events with higher peak currents. The discharge risk parameter increases significantly with shorter wavelength and higher energy.
Laser energy and wavelength are critical factors affecting the discharge risk of solar arrays. Short-wavelength, high-energy lasers pose a greater threat to solar array safety. The findings of this study provide important guidance for selecting laser parameters in WPT systems and for designing protective measures for solar arrays.
, Available online , doi: 10.11884/HPLPB202537.250150
Abstract:
Background Purpose Methods Results Conclusions
High-power GaN-based blue diode lasers have broad application prospects in industrial processing, copper welding, 3D printing, underwater laser communication, and other technical fields. The Chip on Submount (COS), in which the laser chip is packaged on a heat sink, is a single component that can be used to fabricate high-power GaN-based blue diode lasers, offering low thermal resistance and small size.
However, the low reliability of this device still limits the industrial application of the COS single component in high-power GaN-based blue diode lasers, and its performance degradation factors need to be studied.
In this paper, the degradation factors of high-power blue-light COS components were studied using optical microscopy, scanning electron microscopy (SEM), and energy dispersive spectroscopy (EDS).
Experimental research and analysis indicate that the performance degradation of high-power blue laser diodes (LDs) is primarily related to defects in the GaN substrate material, foreign-matter deposition on the cavity facet, and photochemical corrosion. Comparative experiments further reveal that the threshold-current growth rate of hermetically sealed LDs (~0.14 mA/h) is lower than that of non-sealed LDs (~0.27 mA/h). This demonstrates that hermetic packaging of high-power blue COS unit devices can enhance their reliability and provides a reference for their subsequent engineering application.
, Available online , doi: 10.11884/HPLPB202638.250257
Abstract:
Background Purpose Methods Results Conclusions
Space solar arrays, as a crucial part of satellite power systems, are essential for maintaining normal satellite operation. Their large surface area and complex insulation structure make them highly vulnerable to strong external electromagnetic fields. High-power microwaves (HPM), with their wide bandwidth, high power, and rapid action, can readily damage such structures. Therefore, investigating the HPM coupling effects on space solar arrays is of significant importance.
This study investigates the electric field coupling of space solar cell array samples under high-power microwave exposure.
Using a representative solar cell array structure and layout as a reference, this study constructs a three-dimensional model under high-power microwave irradiation and examines the coupling behavior of the array under varying excitation-source parameters, including frequency, polarization direction, and incidence angle.
(1) Within the frequency range of 2–18 GHz, vertically polarized S-band microwave irradiation is most likely to induce discharge damage to the solar cell array, with the induced electric field at the triple junction in cell string gaps being much higher than that at interconnect gaps. (2) Under microwave irradiation, the solar cell samples exhibit intense transient electric fields; in the case of vertical polarization, the induced field is mainly concentrated in the cell string gaps, near the busbars, and along the cell edges. (3) The steady-state peak of the induced electric field at the triple junction decreases with increasing microwave incidence angle and increases with higher microwave power density. (4) The rise and fall times of the microwave pulse have no significant effect on the induced electric field magnitude. (5) The electric field in the space around the cell string gap gradually decreases from the gap center toward the outer region.
The findings of this study provide valuable references for the electromagnetic protection design of space solar cell arrays.
, Available online , doi: 10.11884/HPLPB202638.250256
Abstract:
Background Purpose Methods Results Conclusions
In the fields of high-power microwaves and pulse compression, flat-top output has core advantages over exponentially decaying microwave pulses, such as reducing the maximum transient surface field of the structure and enhancing system stability; it is therefore of significant technical and application value.
The purpose of this study is to develop a new power-doubling method that generates a flat-topped output and to verify its benefits through simulation experiments.
Based on scattering-matrix theory, the research analyzes the energy-storage process and the power gain and flat-top width after input inversion, and conducts simulation experiments using CST.
The simulation results show a power gain of more than 5.7, a flat-top width of 80 ns, a smooth waveform, and a power capacity of up to 160 MW.
Compared with existing technology, this design has a simple structure, compact volume, and convenient processing and maintenance, providing a new solution for the stable output of high-power microwave energy and for research on two-stage pulse compression systems.
, Available online , doi: 10.11884/HPLPB202638.250182
Abstract:
Achieving high-efficiency and high-power operation under low magnetic fields is an important development trend for high-power microwave sources. To enhance the efficiency of high-power microwave sources under low guiding magnetic fields, a high-efficiency coaxial dual-mode relativistic Cherenkov oscillator (RCO) operating under a low guiding magnetic field is proposed. The RCO works in both the coaxial quasi-TEM mode and the TM01 mode and realizes high-efficiency output at low magnetic fields (<0.4 T). In particle-in-cell simulation, when the guiding magnetic field is only 0.35 T, the RCO achieves a microwave output of 3 GW with a beam-wave conversion efficiency of 40%. At the same time, to address the RF breakdown phenomenon observed in the experiment, the power capacity is improved by increasing the number of slow-wave structure periods, which is verified by both simulation and experiment. In the experiment, under a magnetic field of 0.37 T, the output power is 2.85 GW with a pulse width of 57 ns and a conversion efficiency of 34%. The experimental results obtained under the low magnetic field provide strong support for the miniaturization of high-power microwave systems.
, Available online , doi: 10.11884/HPLPB202638.250176
Abstract:
Background Purpose Methods Results Conclusions
Global Navigation Satellite System (GNSS) compatible receiver antennas—integrating multiple global navigation constellations—feature more complex front-door radio frequency (RF) channel architectures than single-constellation GPS antennas. Research on high-power microwave (HPM) effects on GNSS-compatible antennas with complex RF front-ends has rarely been reported.
To investigate HPM effects on GNSS-compatible antennas, radiation experiments were carried out on a type of GNSS-compatible receiver antenna, and a customized characterization approach was designed to analyze the damaged antennas and identify the specific failed components within the complex RF front-end.
The RF front-end structure of the antenna was analyzed, revealing a design with two separate RF channels (around 1.25 GHz and 1.6 GHz), each with a dedicated first-stage low-noise amplifier (LNA), followed by shared second- and third-stage LNAs. The performance of these components was characterized with a customized "hot measurement" setup, using a vector network analyzer together with a test antenna and a DC blocker.
The measurements pinpointed the failure to the first-stage LNA (Q6) of the RF channel corresponding to the HPM source frequency of 1.6 GHz. This specific component showed significant degradation or complete failure. In contrast, the first-stage LNA (Q4) of the other channel (~1.25 GHz) and the shared subsequent amplifier stages (Q2 and Q1) remained unaffected. The root cause was confirmed by replacing the damaged Q6 LNA, which successfully restored the antenna’s full functionality.
This work demonstrates that in a multi-channel RF front-end, HPM effects can be highly localized, selectively damaging the first-stage amplifier of the channel covering the HPM frequency while sparing other sections. The findings provide valuable insights into the HPM vulnerability of complex RF systems and offer a reference methodology for related effect analysis.
, Available online , doi: 10.11884/HPLPB202638.250209
Abstract:
Background Purpose Methods Results Conclusions
The gyrotron traveling-wave tube (gyro-TWT) is a vacuum electronic device with broad application prospects. The magnetron injection gun (MIG) is one of its core components, and its performance directly determines the success or failure of the gyro-TWT. Published MIG research worldwide shows that the working voltage and current of existing MIGs are mostly low and the velocity spread is generally high, which cannot meet the requirements of future megawatt-class gyro-TWTs.
In order to meet the requirement for MIG with high voltage, high current, and low electron beam velocity spread in the development of megawatt-class high-power gyro-TWT, this paper presents a novel design scheme for a single anode electron gun.
The novel electron gun scheme introduces a curved cathode structure to reduce the velocity spread of the electron beam, while effectively increasing the cathode emission band area and reducing the cathode emission density.
The results of PIC simulation show that under the working conditions of 115 kV and 43 A, the designed electron gun has a transverse to longitudinal velocity ratio of 1.05, a velocity spread of 1.63%, and a guiding center radius of 3.41 mm. The thermal analysis results indicate that the MIG can heat the cathode to 1050 ℃ at a power of 76 W.
The simulation and thermal analysis results indicate that the designed MIG meets the design expectations and satisfies the requirements of high voltage, high current, and low electron beam velocity spread for megawatt level gyro-TWT.
, Available online , doi: 10.11884/HPLPB202638.250174
Abstract:
Background Purpose Methods Results Conclusions
Accurate identification of radionuclides is the key to improving the level of radioactivity monitoring.
To further enhance the performance of radionuclide identification, a method combining Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) for radionuclide identification has been studied.
Gamma-ray spectra data of eight single and mixed radioactive nuclides were collected using a sodium iodide spectrometer, and a large number of gamma-ray spectral training data were generated by calculating the probability density of gamma photons at different energy levels and using random sampling methods, followed by normalization of the data. The CNN was then used to extract feature vectors from the input spectral data, and these extracted feature vectors were fed into the RNN for training, with the final radionuclide classification results being output by the activation function.
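The data-augmentation step described above—drawing photon events from a per-channel probability density and normalizing—can be sketched as follows. The channel count, template density, and event count are illustrative, not values from the paper:

```python
import random

def synth_spectrum(pdf, n_events: int, rng: random.Random):
    """Draw n_events photon channels from a per-channel probability
    density and return a normalized synthetic spectrum."""
    counts = [0] * len(pdf)
    for ch in rng.choices(range(len(pdf)), weights=pdf, k=n_events):
        counts[ch] += 1
    total = sum(counts)
    return [c / total for c in counts]  # normalize to unit area

# Toy template density with two photopeak-like bumps (illustrative).
pdf = [1, 2, 8, 3, 1, 1, 4, 10, 4, 1]
rng = random.Random(0)  # fixed seed for reproducibility
spec = synth_spectrum(pdf, n_events=5000, rng=rng)
print(len(spec), round(sum(spec), 6))
```

Repeating this sampling with different seeds (and different nuclide templates) yields the large, statistically varied training set the abstract describes.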
To verify the accuracy of the CNN-RNN method in identifying radionuclides, it was compared with radionuclide identification methods based on a CNN alone and on a CNN combined with a Long Short-Term Memory network (LSTM). On the test set, the LSTM spectral model achieved a recognition accuracy of over 97.5% for single nuclides and over 92.31% for mixed nuclides, while the CNN and CNN-RNN spectral models both achieved 100% accuracy for single nuclides and over 92.95% and 97.44%, respectively, for mixed nuclides.
These results indicate that the CNN-RNN method performs better in gamma-ray spectral identification of radionuclides. Moreover, compared with neural network models trained only on real measured data, incorporating augmented data improves the training efficiency and generalization ability of the models.
, Available online , doi: 10.11884/HPLPB202638.250238
Abstract:
Background Purpose Method Results Conclusions
Accurately simulating the gas-solid coupled heat transfer in high-temperature pebble-bed reactors is challenging due to the complex configuration involving tens of thousands of fuel pebbles. Conventional unresolved CFD-DEM methods are limited in accuracy by their requirement for coarse fluid grids, whereas fully resolved simulations are often prohibitively expensive.
This study aims to develop a semi-resolved function model suitable for fine fluid grids to enable accurate and efficient coupled thermal-fluid simulation in pebble beds.
A Gaussian kernel-based semi-resolved function was introduced to smooth physical properties around particles and compute interphase forces via weighted averaging. The key parameter, the dimensionless diffusion time, was optimized through comparison with Voronoi cell analysis. The model was implemented in an open-source CFD-DEM framework and validated against both a single-particle settling case and a fluidized bed experiment.
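The kernel-weighted averaging at the heart of the method can be illustrated in one dimension. The actual model operates on 3D CFD cells; the grid, field, and kernel width below are illustrative assumptions:

```python
import math

def gaussian_weights(cell_centers, particle_pos, sigma):
    """Normalized Gaussian kernel weights of fluid cells around a particle."""
    w = [math.exp(-((x - particle_pos) ** 2) / (2.0 * sigma ** 2))
         for x in cell_centers]
    s = sum(w)
    return [wi / s for wi in w]

def smoothed_value(cell_values, weights):
    """Kernel-weighted average of a fluid property as seen by the particle."""
    return sum(v * w for v, w in zip(cell_values, weights))

# Fine 1D grid of cell centers around a particle at x = 0.5 (illustrative).
cells = [i / 20 for i in range(21)]   # 0.0, 0.05, ..., 1.0
velocity = [x ** 2 for x in cells]    # some smooth fluid field
w = gaussian_weights(cells, 0.5, sigma=0.1)
val = round(smoothed_value(velocity, w), 4)
print(val)
```

Because the grid cells are smaller than the kernel width, the weighted average blends many sub-particle-scale cells—this is how the approach avoids the coarse-grid requirement of unresolved CFD-DEM.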
Voronoi cell analysis determined the optimal diffusion time to be 0.6. Exceeding this value over-smooths the spatial distribution and obscures local bed features. The single-particle settling case demonstrated excellent agreement with experimental terminal velocities under various viscosities. The fluidized bed simulation successfully captured porosity distribution and the relationship between fluid velocity and particle density, consistent with experimental data. Application to HTR-10 pebble bed thermal-hydraulics showed temperature distributions aligning well with the SA-VSOP benchmark.
The proposed semi-resolved function model effectively overcomes the grid size limitation of traditional CFD-DEM, accurately capturing interphase forces in sub-particle-scale grids. It provides a high-precision and computationally viable scheme for detailed thermal-fluid analysis in advanced pebble-bed reactors.
, Available online , doi: 10.11884/HPLPB202638.250243
Abstract:
Background Purpose Methods Results Conclusions
The traditional Monte-Carlo (MC) method faces an inherent trade-off between geometric modeling accuracy and computational efficiency when addressing real-world irregular terrain modeling.
This paper proposes a fast MC particle transport modeling method based on irregular triangular networks for complex terrains, addressing the technical challenge of achieving adaptive and efficient MC modeling under high-resolution complex terrain scenarios.
The methodology consists of three key phases: First, high-resolution raster-format terrain elevation data are processed through two-dimensional wavelet transformation to precisely identify abrupt terrain variations and extract significant elevation points. Subsequently, the Delaunay triangulation algorithm is employed to construct TIN-structured terrain models from discrete point sets. Finally, the MCNP code's "arbitrary polyhedron" macrobody definition is leveraged to establish geometric planes, with Boolean operations applied to synthesize intricate geometric entities, thereby realizing rapid automated MC modeling for high-resolution complex terrains.
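The first phase—flagging abrupt elevation changes from raster data via a wavelet transform—can be sketched with a single-level 2D Haar transform. The raster and threshold below are illustrative; the paper's pipeline operates on high-resolution terrain data:

```python
def haar2d_details(z):
    """One-level 2D Haar transform of a square elevation raster with even
    side length. Returns the combined detail magnitude per 2x2 block;
    large values mark abrupt terrain variations worth keeping as TIN points."""
    n = len(z)
    details = []
    for i in range(0, n, 2):
        row = []
        for j in range(0, n, 2):
            a, b = z[i][j], z[i][j + 1]
            c, d = z[i + 1][j], z[i + 1][j + 1]
            lh = (a + b - c - d) / 4.0  # horizontal-edge detail
            hl = (a - b + c - d) / 4.0  # vertical-edge detail
            hh = (a - b - c + d) / 4.0  # diagonal detail
            row.append(abs(lh) + abs(hl) + abs(hh))
        details.append(row)
    return details

def significant_blocks(z, threshold):
    """Block indices whose detail magnitude exceeds the threshold."""
    det = haar2d_details(z)
    return [(i, j) for i, r in enumerate(det)
            for j, v in enumerate(r) if v > threshold]

# Flat terrain with one sharp elevation step (illustrative 4x4 raster).
z = [[0, 10, 10, 10],
     [0, 10, 10, 10],
     [0, 10, 10, 10],
     [0, 10, 10, 10]]
sig = significant_blocks(z, threshold=1.0)
print(sig)  # blocks straddling the step are flagged
```

The flagged points would then feed a Delaunay triangulation (e.g. `scipy.spatial.Delaunay`) to build the TIN, and each triangle becomes a facet of an MCNP "arbitrary polyhedron" macrobody.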
Results demonstrate that the proposed method accurately reproduces terrain-induced effects on radiation transport, achieving high-fidelity simulations while significantly compressing the number of cells and enhancing computational efficiency.
This methodology represents a novel approach for large-scale radiation field modeling under complex terrain constraints, demonstrating broad applicability to MC particle transport simulations in arbitrary large-scale complex terrain scenarios.
, Available online , doi: 10.11884/HPLPB202638.250187
Abstract:
Airborne Synthetic Aperture Radar (SAR) is vulnerable to continuous wave (CW) interference in complex electromagnetic environments, leading to significant degradation in imaging quality. Its susceptibility to front-door coupling electromagnetic effects is a critical concern. This study aims to systematically investigate the impact patterns and physical mechanisms of single-frequency CW interference on airborne SAR imaging through equivalent injection experiments. It further seeks to establish a robust evaluation method for interference effects. Equivalent injection testing was employed to simulate CW interference susceptibility. The interference effect was evaluated using a composite SAR image quality factor integrating the Pearson Correlation Coefficient (PCC), Structural Similarity Index (SSIM), and Peak Signal-to-Noise Ratio (PSNR). Detailed analysis of the radio frequency (RF) front-end response and Analog-to-Digital Converter (ADC) behavior under interference was conducted. Significant interference effects were observed when the interfering frequency fell within the receiver's hardware passband (8.5-9.5 GHz) and the Jammer-to-Signal Ratio (JSR) reached 15 dB. While the RF front-end exhibited no significant nonlinearity, the interference induced a nonlinear response specifically within the internal Metal-Oxide-Semiconductor Field-Effect Transistor (MOSFET) of the ADC sampling chip. This nonlinearity generated additional DC components and harmonics, identified as the fundamental physical cause of characteristic interference stripes and overall SAR image quality degradation. The generation of DC offsets and harmonic distortion within the ADC's MOSFET circuitry is the root physical mechanism behind SAR image degradation under CW interference within the specified band and JSR threshold.
This research provides a solid theoretical foundation for designing electromagnetic interference (EMI) countermeasures in airborne SAR systems, thereby enhancing their robustness and imaging capability in challenging complex electromagnetic environments.
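A composite quality factor of the kind described above can be sketched as a weighted mix of PCC, SSIM, and PSNR. The weights, the PSNR normalization reference, and the global (single-window) SSIM used here are illustrative assumptions, since the paper's exact formula is not given in the abstract:

```python
import math

def _mean(x):
    return sum(x) / len(x)

def _var(x, m):
    return sum((v - m) ** 2 for v in x) / len(x)

def pcc(a, b):
    """Pearson correlation coefficient between two flattened images."""
    ma, mb = _mean(a), _mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)
    return cov / math.sqrt(_var(a, ma) * _var(b, mb))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB (assumes 8-bit dynamic range)."""
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

def ssim_global(a, b, peak=255.0):
    """Single-window (global) SSIM with the standard stabilizer constants."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    ma, mb = _mean(a), _mean(b)
    va, vb = _var(a, ma), _var(b, mb)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)
    return ((2 * ma * mb + c1) * (2 * cov + c2)) / \
           ((ma ** 2 + mb ** 2 + c1) * (va + vb + c2))

def quality_factor(a, b, w=(1 / 3, 1 / 3, 1 / 3), psnr_ref=50.0):
    """Illustrative composite factor: equal-weight mix of PCC, SSIM, and
    PSNR normalized against an assumed 50 dB reference, capped at 1."""
    p = min(psnr(a, b) / psnr_ref, 1.0)
    return w[0] * pcc(a, b) + w[1] * ssim_global(a, b) + w[2] * p

ref = [10, 20, 30, 40, 50, 60, 70, 80]
deg = [12, 18, 33, 39, 52, 58, 71, 79]  # mildly degraded copy
qf = quality_factor(ref, deg)
print(round(qf, 3))
```

Combining the three metrics guards against the blind spots of any single one: PCC ignores absolute brightness, PSNR ignores structure, and SSIM can saturate for globally shifted images.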
, Available online , doi: 10.11884/HPLPB202537.250019
Abstract:
Background Purpose Methods Results Conclusions
Fiber laser coherent beam combining technology enables high-power laser output through precise phase control of multiple laser channels. However, factors such as phase control accuracy, optical intensity stability, communication link reliability, and environmental interference can degrade system performance.
This study aims to address the challenge of anomaly detection in phase control for large-scale fiber laser coherent combining by proposing a novel deep learning-based detection method.
First, ten-channel fiber laser coherent combining data were collected, system control processes and beam combining principles were analyzed, and potential anomalies were categorized to generate a simulated dataset. Subsequently, an EMA-Transformer network model incorporating a lightweight Efficient Multi-head Attention (EMA) mechanism was designed. Comparative experiments were conducted to evaluate the model's performance. Finally, an eight-beam fiber laser coherent combining experimental setup was established, and the algorithm was deployed using TensorRT for real-time testing.
The proposed algorithm demonstrated significant improvements, achieving approximately 50% higher accuracy on the validation set and a 2.20% enhancement on the test set compared to ResNet50. In practical testing, the algorithm achieved an inference time of 2.153 ms, meeting real-time requirements for phase control anomaly detection.
The EMA-Transformer model effectively addresses anomaly detection in fiber laser coherent combining systems, offering superior accuracy and real-time performance. This method provides a promising solution for enhancing the stability and reliability of high-power laser systems.

