- i.MX93 LPUARTs have 16-byte RX and TX FIFOs. Take these into use (see the sketch below) and correct some related register definitions
- There is no reason to loop inside the interrupt handler, so remove the looping
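A minimal sketch of the FIFO setup; the register offsets and bit-field names are illustrative of the NXP LPUART IP, not the exact NuttX definitions, and "base" stands for the peripheral base address:

  uint32_t regval;

  /* Transmitter/receiver assumed disabled while the FIFO config changes */

  regval  = getreg32(base + LPUART_FIFO_OFFSET);
  regval |= LPUART_FIFO_RXFE | LPUART_FIFO_TXFE;    /* Enable RX/TX FIFOs */
  putreg32(regval, base + LPUART_FIFO_OFFSET);

  /* Watermarks chosen here only as an example */

  putreg32(LPUART_WATER_RXWATER(1) | LPUART_WATER_TXWATER(4),
           base + LPUART_WATER_OFFSET);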
Signed-off-by: Jukka Laitinen <jukkax@ssrc.tii.ae>
- Remove GPIO (SW) based flow control. It didn't work, and pure HW flow control seems to work fine
- Remove some unneeded ifdefs and change bit-field flags to booleans to clean up the code
Signed-off-by: Jukka Laitinen <jukkax@ssrc.tii.ae>
If the core id needs to be included in the hardware register
calculation, up_cpu_index() should be used instead of this_cpu().
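A minimal sketch of the intended usage; PER_CPU_REG_BASE, PER_CPU_REG_STRIDE and value are placeholders, while up_cpu_index() returns the physical core index:

  uintptr_t regaddr = PER_CPU_REG_BASE +
                      up_cpu_index() * PER_CPU_REG_STRIDE;

  putreg32(value, regaddr);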
Signed-off-by: chao an <anchao@lixiang.com>
There is no need to invalidate the RX buffer before every transfer.
It never gets dirty, so it is enough to invalidate it once after
allocation and again after each transfer.
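A hedged sketch of the placement; rxbuf, RXBUF_SIZE and nreceived are placeholder names:

  /* Once, right after allocation */

  up_invalidate_dcache((uintptr_t)rxbuf, (uintptr_t)rxbuf + RXBUF_SIZE);

  /* ... and again after each completed transfer, before the data is read */

  up_invalidate_dcache((uintptr_t)rxbuf, (uintptr_t)rxbuf + nreceived);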
Signed-off-by: Jukka Laitinen <jukkax@ssrc.tii.ae>
This commit prevents writes to cntfrq_el0 during boot when the CPU is not running in EL3; the register is only writable from the highest implemented exception level.
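A minimal sketch of the guard, with arm64_set_cntfrq() as an illustrative helper name:

  static void arm64_set_cntfrq(uint64_t freq)
  {
    uint64_t el;

    /* CurrentEL bits [3:2] hold the current exception level */

    __asm__ volatile("mrs %0, CurrentEL" : "=r"(el));
    if (((el >> 2) & 3) == 3)
      {
        __asm__ volatile("msr cntfrq_el0, %0" : : "r"(freq));
      }
  }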
Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
common/arm64_mpu.c:355:13: error: format '%llX' expects argument of type 'long long unsigned int', but argument 4 has type 'long unsigned int' [-Werror=format=]
355 | _info("MPU-%d, 0x%08llX-0x%08llX SH=%llX AP=%llX XN=%llX\n", i,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
common/arm64_mpu.c:355:29: note: format string is defined here
355 | _info("MPU-%d, 0x%08llX-0x%08llX SH=%llX AP=%llX XN=%llX\n", i,
| ~~~~~^
| |
| long long unsigned int
| %08lX
common/arm64_mpu.c:355:13: error: format '%llX' expects argument of type 'long long unsigned int', but argument 5 has type 'long unsigned int' [-Werror=format=]
355 | _info("MPU-%d, 0x%08llX-0x%08llX SH=%llX AP=%llX XN=%llX\n", i,
| ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
common/arm64_mpu.c:355:38: note: format string is defined here
355 | _info("MPU-%d, 0x%08llX-0x%08llX SH=%llX AP=%llX XN=%llX\n", i,
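One common way to keep such a format string portable across data models is to cast to a fixed-width type and use the inttypes.h macros; a hedged sketch, with base and limit as placeholder variables:

  #include <inttypes.h>

  _info("MPU-%d, 0x%08" PRIX64 "-0x%08" PRIX64 "\n",
        i, (uint64_t)base, (uint64_t)limit);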
Signed-off-by: xuxingliang <xuxingliang@xiaomi.com>
1. add IS_ALIGNED() definitions for NuttX;
2. replace all ALIGN_UP() and ALIGN_DOWN() uses with the common
   align implementation (a sketch of the helpers follows below);
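A hedged sketch of the common helpers, assuming power-of-two alignment; the exact NuttX definitions may differ:

  #define ALIGN_DOWN(x, a)  ((x) & ~((a) - 1))
  #define ALIGN_UP(x, a)    (((x) + ((a) - 1)) & ~((a) - 1))
  #define IS_ALIGNED(x, a)  (((x) & ((a) - 1)) == 0)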
Signed-off-by: Bowen Wang <wangbowen6@xiaomi.com>
When using the alarm_arch implementation, a 64-bit time can be returned. Using unsigned long would cause precision loss.
Signed-off-by: yinshengkai <yinshengkai@xiaomi.com>
Summary:
- GICv2 cannot be detected on the goldfish platform
- Goldfish uses QEMU version 2.12.0, where GICC_IIDR reads as 0, so
  read ICPIDR2 to determine the GIC version (see the sketch below)
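A hedged sketch of the check; ICPIDR2 bits [7:4] (ArchRev) give the GIC architecture revision, and GIC_ICPIDR2 stands in for whatever register address the local GIC headers define:

  uint32_t pidr2   = getreg32(GIC_ICPIDR2);
  unsigned archrev = (pidr2 >> 4) & 0xf;   /* 2 => GICv2, 3 => GICv3 */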
Signed-off-by: wangming9 <wangming9@xiaomi.com>
Summary:
1. cntfrq_el0 is used to store the timer frequency, which may
   be different from the CPU frequency (see the sketch below).
2. Do not use the up_perf interface for SMP.
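A minimal sketch of reading the timer frequency; the helper name is illustrative:

  static inline uint64_t arm64_timer_getfreq(void)
  {
    uint64_t freq;

    __asm__ volatile("mrs %0, cntfrq_el0" : "=r"(freq));
    return freq;
  }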
Signed-off-by: wangming9 <wangming9@xiaomi.com>
Some apps run the same code on different cores in AMP mode and
need the physical core on which the function is called.
Signed-off-by: hujun5 <hujun5@xiaomi.com>
Signed-off-by: fangxinyong <fangxinyong@xiaomi.com>
reason:
To remove the "sync pause" and decouple the critical section from the dependency on enabling interrupts;
after that we need to further implement "schedlock + spinlock".
changelist
1. Modify the implementation of critical sections so that they no longer involve enabling interrupts or handling synchronous pause events.
2. Attach GIC_SMP_CPUCALL to the pause handler and remove the arch interfaces up_cpu_paused_restore and up_cpu_paused_save.
3. Completely remove up_cpu_pause, up_cpu_resume, up_cpu_paused, and up_cpu_pausereq.
4. Change up_cpu_pause_async to up_send_cpu_sgi.
Signed-off-by: hujun5 <hujun5@xiaomi.com>
reason:
In the kernel, we are planning to remove all occurrences of up_cpu_pause as one of the steps to
simplify the implementation of critical sections. The goal is to let spin_lock_irqsave encapsulate critical sections,
thereby facilitating the replacement of the critical section (big lock) with smaller spin_lock_irqsave regions (small locks), as in the sketch below.
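A minimal sketch of the target pattern, guarding driver-local state with its own spinlock instead of the global critical section; dev_update() and the protected state are placeholders:

  #include <nuttx/spinlock.h>

  static spinlock_t g_dev_lock = SP_UNLOCKED;

  void dev_update(void)
  {
    irqstate_t flags = spin_lock_irqsave(&g_dev_lock);

    /* ... touch only the state protected by g_dev_lock ... */

    spin_unlock_irqrestore(&g_dev_lock, flags);
  }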
Signed-off-by: hujun5 <hujun5@xiaomi.com>
Since the FPU is now always saved into the current process's stack location
upon exception entry, there is no need to keep fpu_regs (or saved_fpu_regs)
in the TCB.
Correct some of the cache operations:
- The EP0 request length was handled incorrectly
- The cache invalidate for received data exceeded the received buffer (see the sketch below)
- writedtd is also called with no data (EP0 ACK/NACK); don't touch the cache in that case
Fix trip wire handling to conform to the i.MX93 reference manual.
Also add DEBUGASSERTs to check the validity of pointers and sizes.
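A hedged sketch of the clamped, conditional invalidate; buf, buflen and nrecvd are placeholders for the driver's request fields:

  if (buf != NULL && nrecvd > 0)
    {
      size_t len = MIN(nrecvd, buflen);   /* Never run past the buffer */

      up_invalidate_dcache((uintptr_t)buf, (uintptr_t)buf + len);
    }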
Co-authored-by: Jukka Laitinen <jukkax@ssrc.tii.ae>
As the handling of sp_el0 was moved from the context switch routine
to exception entry/exit, we must set sp_el0 explicitly when the user
process is first started.
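A minimal sketch; the helper name is illustrative:

  /* Program the user stack pointer once before the first drop to EL0,
   * since the context switch code no longer does it for a task that
   * has never run.
   */

  static inline void arm64_init_sp_el0(uint64_t sp)
  {
    __asm__ volatile("msr sp_el0, %0" : : "r"(sp));
  }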
1. refactor the ghs/gcc/clang/armclang toolchain management in CMake
2. unify the cmake toolchain naming style
3. support the greenhills build procedure with CMake
4. add protected build for the greenhills and gnu toolchains with CMake
Signed-off-by: guoshichao <guoshichao@xiaomi.com>
Add arm64 qemu PM support to demonstrate pm_idle usage in both non-SMP and SMP configurations;
a chip should build on the demo to add more operations in pm_idle_handler.
Signed-off-by: buxiasen <buxiasen@xiaomi.com>
This fixes two issues with the tick timer:
1) Each tick was longer than the requested period. Setting the compare
register was done by first reading the current time and only then
writing the compare value. In addition, when handling the timer
interrupts in arch_alarm.c / oneshot_callback, the current_tick is
read first, all the tick handling is done, and only after that is the
next tick started. The whole tick processing time was therefore added
to the total tick time.
2) When the compare time is not aligned with the tick period and is
drifting, eventually any call to ONESHOT_TICK_CURRENT would return
either the current tick or the next one, depending on the rounding of
the division by cycle_per_tick. This again leads to oneshot_callback
randomly handling two ticks at a time, which breaks all wdog-based
timers, causing them to randomly time out too early.
The issues are fixed as follows:
Align the compare time register to be evenly divisible by cycle_per_tick.
This makes arm64_tick_current always return the currently ongoing tick,
fixing 2). Calculating the next tick's start from the aligned current
count also fixes 1), as there is no time drift in the start cycle. A
sketch of the alignment follows below.
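A minimal sketch of the arithmetic, with placeholder counter accessors:

  uint64_t now     = read_counter();                /* placeholder reader */
  uint64_t aligned = now - (now % cycle_per_tick);  /* start of current tick */

  write_compare(aligned + cycle_per_tick);          /* placeholder setter */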
Signed-off-by: Jukka Laitinen <jukkax@ssrc.tii.ae>
reason:
Currently, if we need to schedule a task onto another CPU, we have to completely halt that CPU,
manipulate its scheduling linked list, and then resume its operation. This process is both time-consuming and unnecessary.
During this process, both the current CPU and the target CPU are inevitably forced into a busy loop.
The improved strategy is to simply send a cross-core interrupt to the target CPU.
The current CPU continues to run while the target CPU responds to the interrupt, eliminating the guaranteed busy loop.
Signed-off-by: hujun5 <hujun5@xiaomi.com>
To align with the ARMv7-A implementation, remove the clearing of
interrupts during GIC initialization to avoid losing interrupts during asynchronous startup.
Signed-off-by: hujun5 <hujun5@xiaomi.com>
reason:
When a context switch occurs, up_switch_context is executed.
To reduce the time taken for context switching,
we inline the up_switch_context function.
Signed-off-by: hujun5 <hujun5@xiaomi.com>
for the citimon stats:
thread 0:
  enter_critical (t0)
  up_switch_context
  note suspend thread0 (t1)
thread 1:
  thread running
  IRQ happen, in ISR:
    post thread0
    up_switch_context
    note resume thread0 (t2)
    ISR continue f1
    ISR continue f2
    ...
    ISR continue fn
thread 0:
  leave_critical (t3)
You will see that thread 0's critical_section time is:
(t1 - t0) + (t3 - t2)
But this result includes the time spent in f1, f2, ..., fn, so it is
wrong to tell the user that thread0 held the critical section for a
long time when that time does not actually belong to it.
Resolve:
change nxsched_suspend/resume_scheduler to be called when the suspend/resume really happens
Signed-off-by: ligd <liguiding1@xiaomi.com>