Otherwise the free holder list will leak, causing either a crash because
holder->htcb is NULL, or the free holder list (erroneously) becoming empty
even though most of the holder entries are free.
nxevent_tickwait() will remove the node in the failure case (EINTR). If the
node has already been deleted in nxevent_post(), a NULL pointer dereference
will be triggered after the semaphore wait.
Signed-off-by: chao an <anchao@lixiang.com>
The holder list can be modified from interrupt context, so using
addrenv_select is not safe. Access the semaphore by mapping it into kernel
virtual memory instead.
The temporary mappings via addrenv_select() and addrenv_restore() simply
do not work from interrupt context, so remove their usage and replace them
with kmap, which is safe.
Add a sem_wait fast path: use atomic operations to ensure the atomicity of
semcount updates, so the fast path does not depend on the critical section
(a sketch of the idea follows the benchmark numbers below).
Test with robot:
before the change:
nxmutex_lock cost: 78 ns
nxmutex_unlock cost: 82 ns
after the change:
nxmutex_lock cost: 28 ns
nxmutex_unlock cost: 14 ns
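A minimal sketch of the kind of atomic fast path described above (the names
and layout are illustrative, not the actual NuttX sources; it assumes C11
atomics are usable for semcount):

#include <stdatomic.h>
#include <errno.h>

/* Illustrative fast path: try to take the semaphore without entering a
 * critical section.  Only when the count is not positive (i.e. there may
 * be waiters) does the caller fall back to the blocking slow path.
 */

static inline int sem_wait_fast(atomic_int *semcount)
{
  int old = atomic_load_explicit(semcount, memory_order_relaxed);

  while (old > 0)
    {
      /* Atomically decrement the count; on success the semaphore was
       * taken with no locking at all.  On failure 'old' is refreshed
       * and the loop retries.
       */

      if (atomic_compare_exchange_weak_explicit(semcount, &old, old - 1,
                                                memory_order_acquire,
                                                memory_order_relaxed))
        {
          return 0;
        }
    }

  return -EAGAIN; /* Caller takes the blocking slow path */
}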
Signed-off-by: zhangyuan29 <zhangyuan29@xiaomi.com>
reason:
1. There is a similar PR: https://github.com/apache/nuttx/pull/14079.
2. Currently, no one is using recursive locks with write_lock_irqsave/read_lock_irqsave.
3. Nested spinlocks are harmful and prone to abuse, leading to a decline in code quality and performance.
4. Nested spinlocks are also not available in Linux.
5. In our future plans, nested usage of enter_critical_section and spin_lock_irqsave will also be removed.
Signed-off-by: hujun5 <hujun5@xiaomi.com>
reason:
spin_lock_wo_note/spin_unlock_wo_note should be called in matching pairs.
This commit fixes the regression from https://github.com/apache/nuttx/pull/13933
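Illustrative only (the lock and the protected data are made up): the point is
simply that every spin_lock_wo_note() must be balanced by a
spin_unlock_wo_note() on the same lock, on every path through the code:

static spinlock_t g_some_lock = SP_UNLOCKED;

void example(void)
{
  spin_lock_wo_note(&g_some_lock);

  /* ... touch the data protected by g_some_lock ... */

  spin_unlock_wo_note(&g_some_lock); /* matching unlock on the same lock */
}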
Signed-off-by: hujun5 <hujun5@xiaomi.com>
reason:
Since assert() may synchronously wait for another CPU to stop, potentially
leading to a deadlock, we replace enter_critical_section with a small
spinlock to avoid such a situation.
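A hedged sketch of that pattern (the lock name and the body are illustrative,
not the actual assert code):

static spinlock_t g_assert_lock = SP_UNLOCKED;

void assert_path_sketch(void)
{
  /* A small dedicated spinlock only serializes the assert path itself;
   * unlike enter_critical_section() it never waits for another CPU to
   * be stopped, so it cannot deadlock against the CPU-stop handshake.
   */

  irqstate_t flags = spin_lock_irqsave(&g_assert_lock);

  /* ... record the assertion, dump state ... */

  spin_unlock_irqrestore(&g_assert_lock, flags);
}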
Signed-off-by: hujun5 <hujun5@xiaomi.com>
This reverts commit befe29801f.
Because a few regressions have been reported and
it likely will take some time to fix them:
* for some configurations, a semaphore can be placed in a special
memory region where atomic access is not available.
cf. https://github.com/apache/nuttx/pull/14625
* include/nuttx/lib/stdatomic.h is not compatible with
the C11 semantics, which the change in question relies on.
cf. https://github.com/apache/nuttx/pull/14755
The performance penalty in SMP mode is too big to justify taking the big
kernel lock simply to bump the address environment reference counter; fix
this by using the compiler-provided atomic macros.
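A minimal sketch of the counting itself, assuming GCC-style atomic builtins
(the struct and function names are made up, not the actual addrenv code):

struct my_addrenv_s
{
  int refs; /* Reference count, manipulated atomically */
};

static inline void my_addrenv_take(struct my_addrenv_s *env)
{
  __atomic_fetch_add(&env->refs, 1, __ATOMIC_SEQ_CST);
}

static inline int my_addrenv_drop(struct my_addrenv_s *env)
{
  /* Returns the new count; 0 means the caller may free the addrenv */

  return __atomic_sub_fetch(&env->refs, 1, __ATOMIC_SEQ_CST);
}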
reason:
Since pthread_cond_broadcast is already protected by a mutex, even if
sem_post causes a context switch, the wait_count value will not be affected.
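A hypothetical simplification of the argument (the struct and its fields do
not match the real pthread_cond_s): because the caller already holds the
mutex, wait_count cannot change underneath the broadcast loop even if
sem_post() switches context:

#include <semaphore.h>

struct cond_sketch_s
{
  sem_t sem;        /* Waiters block on this semaphore */
  int   wait_count; /* Number of blocked waiters, protected by the mutex */
};

static void cond_broadcast_sketch(struct cond_sketch_s *cond)
{
  /* The mutex is held by the caller, so wait_count is stable here */

  while (cond->wait_count > 0)
    {
      cond->wait_count--;
      sem_post(&cond->sem); /* May cause a context switch; that is fine */
    }
}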
Signed-off-by: hujun5 <hujun5@xiaomi.com>
Don't compile dump_assert_info logic if CONFIG_DEBUG_ALERT=n
With _alert() disabled this logic does nothing, but the compiler is not
smart enough to optimize it away.
On a minimal stm32f3 configuration this saves 220 B of flash.
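A minimal sketch of the kind of guard meant here (the signature is
illustrative, not the actual dump_assert_info prototype):

#ifdef CONFIG_DEBUG_ALERT
static void dump_assert_info(FAR const char *file, int line)
{
  _alert("Assertion failed at %s:%d\n", file, line);

  /* ... further _alert() output: registers, stacks, task list ... */
}
#else
#  define dump_assert_info(file, line)
#endif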
reason:
The new implementation does not require the use of enter_critical_section,
so the source code needs to be moved to user space.
This reverts commit d189a86a35.
reason:
When entering an exception or interrupt there are two sets of registers:
one is the "running regs", which we need to save, and the other is the
"ready-to-run regs", which we may soon use.
For consistency, we always store the "running regs" in the regs field of
g_running_tasks; otherwise the "running regs" may end up being stored in
the wrong place.
Whenever we need to access the "running regs", we should uniformly retrieve
them from the regs field of g_running_tasks.
As a next step, we will rename the set_current_regs/up_current_regs functions
of each architecture to more appropriate names, used solely for identifying
interrupts.
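A hedged sketch of the convention, roughly as it might appear in an
architecture's interrupt dispatch (names and register types differ per arch;
this is not a drop-in patch):

uint32_t *irq_dispatch_sketch(int irq, uint32_t *regs)
{
  /* 'regs' is the context of the interrupted thread: the running regs.
   * Always park them in the regs field of the task that g_running_tasks
   * reports as running on this CPU, and always read them back from there.
   */

  struct tcb_s *tcb = g_running_tasks[this_cpu()];

  tcb->xcp.regs = regs;

  irq_dispatch(irq, regs); /* May switch tasks */

  /* Whatever task is now marked running supplies the context to resume */

  return g_running_tasks[this_cpu()]->xcp.regs;
}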
Signed-off-by: hujun5 <hujun5@xiaomi.com>
reason:
We decouple semcount from business logic by using an independent counting variable,
which allows us to remove critical sections in many cases.
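Illustrative only (the names are hypothetical): the business logic keeps its
own atomically updated counter instead of peeking at the semaphore's internal
semcount under a critical section:

#include <stdatomic.h>
#include <semaphore.h>

struct waitable_s
{
  sem_t sem;          /* Used only for blocking and waking */
  atomic_int waiters; /* Independent count, no critical section needed */
};

static void wait_on(struct waitable_s *w)
{
  atomic_fetch_add(&w->waiters, 1);
  sem_wait(&w->sem);
}

static void wake_all(struct waitable_s *w)
{
  int n = atomic_exchange(&w->waiters, 0);

  while (n-- > 0)
    {
      sem_post(&w->sem);
    }
}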
Signed-off-by: hujun5 <hujun5@xiaomi.com>
reason:
when CONFIG_CLOCK_TIMEKEEPING=y, the compiler may report errors such as:
In file included from /home/hujun5/downloads1/vela_sim/nuttx/include/nuttx/sched.h:42,
from /home/hujun5/downloads1/vela_sim/nuttx/include/nuttx/arch.h:89,
from boardctl.c:33:
/home/hujun5/downloads1/vela_sim/nuttx/include/nuttx/irq.h:261:12: error: conflicting types for 'enter_critical_section'; have 'irqstate_t(void)' {aka 'long unsigned int(void)'}
261 | irqstate_t enter_critical_section(void) noinstrument_function;
| ^~~~~~~~~~~~~~~~~~~~~~
In file included from /home/hujun5/downloads1/vela_sim/nuttx/include/nuttx/wqueue.h:37,
from /home/hujun5/downloads1/vela_sim/nuttx/include/nuttx/addrenv.h:39,
from /home/hujun5/downloads1/vela_sim/nuttx/include/nuttx/sched.h:40:
/home/hujun5/downloads1/vela_sim/nuttx/include/nuttx/wdog.h:267:11: note: previous implicit declaration of 'enter_critical_section' with type 'int()'
267 | flags = enter_critical_section();
| ^~~~~~~~~~~~~~~~~~~~~~
hujun5@hujun5-OptiPlex-7070:~/downloads1/vela_sim/nuttx$ make -j12
sched/sched_processtimer.c: In function 'nxsched_process_timer':
sched/sched_processtimer.c:178:3: error: implicit declaration of function 'clock_update_wall_time' [-Werror=implicit-function-declaration]
178 | clock_update_wall_time();
Signed-off-by: hujun5 <hujun5@xiaomi.com>
reason:
some architectures do not support issuing interrupts to the local CPU.
This commit fixes the regression from https://github.com/apache/nuttx/pull/14663
Signed-off-by: hujun5 <hujun5@xiaomi.com>
reason:
The old implementation of the SMP call could still result in waiting even
when the "no wait" parameter was used, so invoking it within a critical
section could lead to deadlocks. Therefore, in order to implement a truly
asynchronous SMP call strategy, we have added nxsched_smp_call_async.
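A sketch of how the async variant might be used; the prototypes shown
(nxsched_smp_call_init/nxsched_smp_call_async taking a persistent
struct smp_call_data_s) are my assumption and should be checked against
include/nuttx/sched.h:

static int remote_work(FAR void *arg)
{
  /* Runs on the target CPU(s) */

  return OK;
}

static struct smp_call_data_s g_call_data;

static void trigger_remote_work(cpu_set_t cpuset)
{
  /* Queue the call and return immediately: no waiting, so this is safe
   * even from within a critical section.
   */

  nxsched_smp_call_init(&g_call_data, remote_work, NULL);
  nxsched_smp_call_async(cpuset, &g_call_data);
}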
Signed-off-by: hujun5 <hujun5@xiaomi.com>
Be consistent with the datatype used for the 'key'. As the API in
the .h file uses int, I chose to replace uint32_t with int everywhere.
This change ensures that work_notifier_setup() will not return a
negative value indicating an error to the caller when in fact the
notifier was correctly set up. This would happen once
work_notifier_setup() had been called 0x8000000 times and that many
keys had been allocated.
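A hypothetical sketch of the wrap-around concern (not the actual
work_notifier code): if the key stays a positive int and wraps back to 1
instead of going negative, it can never be mistaken for an error return:

#include <limits.h>

static int generate_key(void)
{
  static int next_key = 0; /* Serialization omitted for brevity */

  if (next_key >= INT_MAX)
    {
      next_key = 0;
    }

  return ++next_key; /* Always in the range 1..INT_MAX */
}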
reason:
In SMP, it's possible that when CONFIG_SMP_DEFAULT_CPUSET is set to 1
and taskA is created with a relatively low priority, it gets added to the
g_readytorun queue with an affinity of 0x1 while CPUs 1~n sit idle.
Subsequently, when we attempt to change the affinity of taskA using
nxsched_set_affinity, the scheduling mechanism might not be triggered due to
the lack of a proper condition check.
This can result in taskA remaining unscheduled and therefore unable to run.
Signed-off-by: hujun5 <hujun5@xiaomi.com>
reason:
If we need to handle a tcb that is running on another CPU, it must be
processed through the SMP call mechanism.
This commit fixes the regression from https://github.com/apache/nuttx/pull/13863
Signed-off-by: hujun5 <hujun5@xiaomi.com>
Don't compile dump_task logic if CONFIG_DEBUG_ALERT=n.
With _alert() disabled this logic does nothing, but the compiler is not
smart enough to optimize it away.
On a minimal stm32f3 configuration this saves 396 B of flash.
reason:
We decouple semcount from business logic by using an independent counting variable,
which allows us to remove critical sections in many cases.
Signed-off-by: hujun5 <hujun5@xiaomi.com>