UHID - User-space I/O driver support for HID subsystem
========================================================
-The HID subsystem needs two kinds of drivers. In this document we call them:
+UHID allows user-space to implement HID transport drivers. Please see
+hid-transport.txt for an introduction into HID transport drivers. This document
+relies heavily on the definitions declared there.
- 1. The "HID I/O Driver" is the driver that performs raw data I/O to the
- low-level device. Internally, they register an hid_ll_driver structure with
- the HID core. They perform device setup, read raw data from the device and
- push it into the HID subsystem and they provide a callback so the HID
- subsystem can send data to the device.
-
- 2. The "HID Device Driver" is the driver that parses HID reports and reacts on
- them. There are generic drivers like "generic-usb" and "generic-bluetooth"
- which adhere to the HID specification and provide the standardizes features.
- But there may be special drivers and quirks for each non-standard device out
- there. Internally, they use the hid_driver structure.
-
-Historically, the USB stack was the first subsystem to provide an HID I/O
-Driver. However, other standards like Bluetooth have adopted the HID specs and
-may provide HID I/O Drivers, too. The UHID driver allows to implement HID I/O
-Drivers in user-space and feed the data into the kernel HID-subsystem.
-
-This allows user-space to operate on the same level as USB-HID, Bluetooth-HID
-and similar. It does not provide a way to write HID Device Drivers, though. Use
-hidraw for this purpose.
+With UHID, a user-space transport driver can create kernel hid-devices for each
+device connected to the user-space controlled bus. The UHID API defines the I/O
+events provided from the kernel to user-space and vice versa.
There is an example user-space application in ./samples/uhid/uhid-example.c
struct uhid_event {
__u32 type;
union {
- struct uhid_create_req create;
- struct uhid_data_req data;
+ struct uhid_create2_req create2;
+ struct uhid_output_req output;
+ struct uhid_input2_req input2;
...
} u;
};
only a single event can be sent per read() or write(). Pending data is ignored.
If you want to handle multiple events in a single syscall, then use vectored
I/O with readv()/writev().
+The "type" field defines the payload. For each type, there is a
+payload-structure available in the union "u" (except for empty payloads). This
+payload contains management and/or device data.
-The first thing you should do is sending an UHID_CREATE event. This will
+The first thing you should do is to send an UHID_CREATE2 event. This will
register the device. UHID will respond with an UHID_START event. You can now
start sending data to and reading data from UHID. However, unless UHID sends the
UHID_OPEN event, the internally attached HID Device Driver has no user attached.
You may decide to ignore UHID_OPEN/UHID_CLOSE, though. I/O is allowed even
though the device may have no users.
-If you want to send data to the HID subsystem, you send an HID_INPUT event with
-your raw data payload. If the kernel wants to send data to the device, you will
-read an UHID_OUTPUT or UHID_OUTPUT_EV event.
+If you want to send data on the interrupt channel to the HID subsystem, you
+send an UHID_INPUT2 event with your raw data payload. If the kernel wants to
+send data on the interrupt channel to the device, you will read an UHID_OUTPUT
+event.
+Data requests on the control channel are currently limited to GET_REPORT and
+SET_REPORT (no other data reports on the control channel are defined so far).
+Those requests are always synchronous. That means the kernel sends
+UHID_GET_REPORT and UHID_SET_REPORT events and requires you to forward them to
+the device on the control channel. Once the device responds, you must forward
+the response via UHID_GET_REPORT_REPLY and UHID_SET_REPORT_REPLY to the kernel.
+The kernel blocks internal driver-execution during such round-trips (times out
+after a hard-coded period).
If your device disconnects, you should send an UHID_DESTROY event. This will
-unregister the device. You can now send UHID_CREATE again to register a new
+unregister the device. You can now send UHID_CREATE2 again to register a new
device.
If you close() the fd, the device is automatically unregistered and destroyed
internally.
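+
+A minimal sketch of this flow, in the spirit of samples/uhid/uhid-example.c
+(error handling trimmed; the device name, the IDs and the report descriptor
+passed in are illustrative placeholders):
+
+  #include <fcntl.h>
+  #include <string.h>
+  #include <unistd.h>
+  #include <linux/input.h>	/* BUS_USB */
+  #include <linux/uhid.h>
+
+  static int uhid_write(int fd, const struct uhid_event *ev)
+  {
+          ssize_t ret = write(fd, ev, sizeof(*ev));
+          return ret == (ssize_t)sizeof(*ev) ? 0 : -1;
+  }
+
+  /* Register a device; fd is an open file descriptor to /dev/uhid,
+   * e.g. from open("/dev/uhid", O_RDWR | O_CLOEXEC). rd_size must not
+   * exceed sizeof(ev.u.create2.rd_data). */
+  static int create_device(int fd, const __u8 *rd, __u16 rd_size)
+  {
+          struct uhid_event ev;
+
+          memset(&ev, 0, sizeof(ev));
+          ev.type = UHID_CREATE2;
+          strcpy((char *)ev.u.create2.name, "test-uhid-device");
+          memcpy(ev.u.create2.rd_data, rd, rd_size);
+          ev.u.create2.rd_size = rd_size;
+          ev.u.create2.bus = BUS_USB;
+          ev.u.create2.vendor = 0x15d9;	/* placeholder IDs */
+          ev.u.create2.product = 0x0a37;
+          return uhid_write(fd, &ev);
+  }
+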
write()
-------
write() allows you to modify the state of the device and feed input data into
-the kernel. The following types are supported: UHID_CREATE, UHID_DESTROY and
-UHID_INPUT. The kernel will parse the event immediately and if the event ID is
+the kernel. The kernel will parse the event immediately and if the event ID is
not supported, it will return -EOPNOTSUPP. If the payload is invalid, then
-EINVAL is returned, otherwise, the amount of data that was read is returned and
-the request was handled successfully.
+the request was handled successfully. O_NONBLOCK does not affect write() as
+writes are always handled immediately in a non-blocking fashion. Future requests
+might make use of O_NONBLOCK, though.
- UHID_CREATE:
+ UHID_CREATE2:
This creates the internal HID device. No I/O is possible until you send this
- event to the kernel. The payload is of type struct uhid_create_req and
+ event to the kernel. The payload is of type struct uhid_create2_req and
contains information about your device. You can start I/O now.
- UHID_CREATE2:
- Same as UHID_CREATE, but the HID report descriptor data (rd_data) is an array
- inside struct uhid_create2_req, instead of a pointer to a separate array.
- Enables use from languages that don't support pointers, e.g. Python.
-
UHID_DESTROY:
This destroys the internal HID device. No further I/O will be accepted. There
may still be pending messages that you can receive with read() but no further
UHID_INPUT2 events can be sent to the kernel.
- You can create a new device by sending UHID_CREATE again. There is no need to
+ You can create a new device by sending UHID_CREATE2 again. There is no need to
reopen the character device.
- UHID_INPUT:
- You must send UHID_CREATE before sending input to the kernel! This event
- contains a data-payload. This is the raw data that you read from your device.
- The kernel will parse the HID reports and react on it.
-
UHID_INPUT2:
- Same as UHID_INPUT, but the data array is the last field of uhid_input2_req.
- Enables userspace to write only the required bytes to kernel (ev.type +
- ev.u.input2.size + the part of the data array that matters), instead of
- the entire struct uhid_input2_req.
-
- UHID_FEATURE_ANSWER:
- If you receive a UHID_FEATURE request you must answer with this request. You
- must copy the "id" field from the request into the answer. Set the "err" field
- to 0 if no error occurred or to EIO if an I/O error occurred.
+ You must send UHID_CREATE2 before sending input to the kernel! This event
+ contains a data-payload. This is the raw data that you read from your device
+ on the interrupt channel. The kernel will parse the HID reports (see the
+ sketch after this list).
+
+ UHID_GET_REPORT_REPLY:
+ If you receive a UHID_GET_REPORT request you must answer with this request.
+ You must copy the "id" field from the request into the answer. Set the "err"
+ field to 0 if no error occurred or to EIO if an I/O error occurred.
If "err" is 0 then you should fill the buffer of the answer with the results
- of the feature request and set "size" correspondingly.
+ of the GET_REPORT request and set "size" correspondingly.
+
+ UHID_SET_REPORT_REPLY:
+ This is the SET_REPORT equivalent of UHID_GET_REPORT_REPLY. Unlike
+ GET_REPORT, SET_REPORT never returns a data buffer; therefore, it is
+ sufficient to set the "id" and "err" fields correctly.
read()
------
-read() will return a queued output report. These output reports can be of type
-UHID_START, UHID_STOP, UHID_OPEN, UHID_CLOSE, UHID_OUTPUT or UHID_OUTPUT_EV. No
-reaction is required to any of them but you should handle them according to your
-needs. Only UHID_OUTPUT and UHID_OUTPUT_EV have payloads.
+read() will return a queued event. No reaction is required to any of these
+events, but you should handle them according to your needs. A small dispatch
+sketch follows this list.
UHID_START:
This is sent when the HID device is started. Consider this as an answer to
- UHID_CREATE. This is always the first event that is sent.
+ UHID_CREATE2. This is always the first event that is sent. Note that this
+ event might not be available immediately after write(UHID_CREATE2) returns.
+ Device drivers might require delayed setup.
+ This event contains a payload of type uhid_start_req. The "dev_flags" field
+ describes special behaviors of a device. The following flags are defined:
+ UHID_DEV_NUMBERED_FEATURE_REPORTS:
+ UHID_DEV_NUMBERED_OUTPUT_REPORTS:
+ UHID_DEV_NUMBERED_INPUT_REPORTS:
+ Each of these flags defines whether a given report-type uses numbered
+ reports. If numbered reports are used for a type, all messages from
+ the kernel already have the report-number as prefix. Otherwise, no
+ prefix is added by the kernel.
+ For messages sent by user-space to the kernel, you must adjust the
+ prefixes according to these flags.
UHID_STOP:
This is sent when the HID device is stopped. Consider this as an answer to
UHID_DESTROY.
- If the kernel HID device driver closes the device manually (that is, you
- didn't send UHID_DESTROY) then you should consider this device closed and send
- an UHID_DESTROY event. You may want to reregister your device, though. This is
- always the last message that is sent to you unless you reopen the device with
- UHID_CREATE.
+ If you didn't destroy your device via UHID_DESTROY, but the kernel sends an
+ UHID_STOP event, this should usually be ignored. It means that the kernel
+ reloaded/changed the device driver loaded on your HID device (or some other
+ maintenance action happened). You can usually ignore any UHID_STOP events
+ safely.
UHID_OPEN:
This is sent when the HID device is opened. That is, the data that the HID
device provides is read by some other process. You may ignore this event but
it is useful for power-management. As long as you haven't received this event
there is actually no other process that reads your data so there is no need to
- send UHID_INPUT events to the kernel.
+ send UHID_INPUT2 events to the kernel.
UHID_CLOSE:
This is sent when there are no more processes which read the HID data. It is
the counterpart of UHID_OPEN and you may as well ignore this event.
UHID_OUTPUT:
This is sent if the HID device driver wants to send raw data to the I/O
- device. You should read the payload and forward it to the device. The payload
- is of type "struct uhid_data_req".
+ device on the interrupt channel. You should read the payload and forward it to
+ the device. The payload is of type "struct uhid_data_req".
This may be received even though you haven't received UHID_OPEN, yet.
- UHID_OUTPUT_EV (obsolete):
- Same as UHID_OUTPUT but this contains a "struct input_event" as payload. This
- is called for force-feedback, LED or similar events which are received through
- an input device by the HID subsystem. You should convert this into raw reports
- and send them to your device similar to events of type UHID_OUTPUT.
- This is no longer sent by newer kernels. Instead, HID core converts it into a
- raw output report and sends it via UHID_OUTPUT.
-
- UHID_FEATURE:
- This event is sent if the kernel driver wants to perform a feature request as
- described in the HID specs. The report-type and report-number are available in
- the payload.
- The kernel serializes feature requests so there will never be two in parallel.
- However, if you fail to respond with a UHID_FEATURE_ANSWER in a time-span of 5
- seconds, then the requests will be dropped and a new one might be sent.
- Therefore, the payload also contains an "id" field that identifies every
- request.
-
-Document by:
- David Herrmann <dh.herrmann@googlemail.com>
+ UHID_GET_REPORT:
+ This event is sent if the kernel driver wants to perform a GET_REPORT request
+ on the control channel as described in the HID specs. The report-type and
+ report-number are available in the payload.
+ The kernel serializes GET_REPORT requests so there will never be two in
+ parallel. However, if you fail to respond with a UHID_GET_REPORT_REPLY, the
+ request might silently time out.
+ Once you read a GET_REPORT request, you shall forward it to the hid device and
+ remember the "id" field in the payload. Once your hid device responds to the
+ GET_REPORT (or if it fails), you must send a UHID_GET_REPORT_REPLY to the
+ kernel with the exact same "id" as in the request. If the request already
+ timed out, the kernel will ignore the response silently. The "id" field is
+ never re-used, so conflicts cannot happen.
+
+ UHID_SET_REPORT:
+ This is the SET_REPORT equivalent of UHID_GET_REPORT. On receipt, you shall
+ send a SET_REPORT request to your hid device. Once it replies, you must tell
+ the kernel about it via UHID_SET_REPORT_REPLY.
+ The same restrictions as for UHID_GET_REPORT apply.
+
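+A sketch of a matching read()-side dispatch (assuming the helpers from the
+write() section above; the forwarding steps are placeholders for your bus
+I/O):
+
+  static int dispatch_one(int fd)
+  {
+          struct uhid_event ev;
+          ssize_t ret = read(fd, &ev, sizeof(ev));
+
+          if (ret <= 0)
+                  return -1;
+
+          switch (ev.type) {
+          case UHID_START:	/* ev.u.start.dev_flags is valid here */
+          case UHID_STOP:
+          case UHID_OPEN:
+          case UHID_CLOSE:
+                  break;	/* bookkeeping only */
+          case UHID_OUTPUT:
+                  /* forward ev.u.output.data (ev.u.output.size bytes) to
+                   * the device on the interrupt channel */
+                  break;
+          case UHID_GET_REPORT:
+                  /* forward to the device, then answer via
+                   * reply_get_report() with ev.u.get_report.id */
+                  break;
+          case UHID_SET_REPORT:
+                  /* forward, then send UHID_SET_REPORT_REPLY with
+                   * ev.u.set_report.id */
+                  break;
+          }
+          return 0;
+  }
+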
+----------------------------------------------------
+Written 2012, David Herrmann <dh.herrmann@gmail.com>
-------------------
this_cpu operations are a way of optimizing access to per cpu
-variables associated with the *currently* executing processor through
-the use of segment registers (or a dedicated register where the cpu
-permanently stored the beginning of the per cpu area for a specific
-processor).
+variables associated with the *currently* executing processor. This is
+done through the use of segment registers (or a dedicated register where
+the cpu permanently stores the beginning of the per cpu area for a
+specific processor).
-The this_cpu operations add a per cpu variable offset to the processor
-specific percpu base and encode that operation in the instruction
+this_cpu operations add a per cpu variable offset to the processor
+specific per cpu base and encode that operation in the instruction
operating on the per cpu variable.
-This means there are no atomicity issues between the calculation of
+This means that there are no atomicity issues between the calculation of
the offset and the operation on the data. Therefore it is not
-necessary to disable preempt or interrupts to ensure that the
+necessary to disable preemption or interrupts to ensure that the
processor is not changed between the calculation of the address and
the operation on the data.
Read-modify-write operations are of particular interest. Frequently
processors have special lower latency instructions that can operate
-without the typical synchronization overhead but still provide some
-sort of relaxed atomicity guarantee. The x86 for example can execute
-RMV (Read Modify Write) instructions like inc/dec/cmpxchg without the
+without the typical synchronization overhead, but still provide some
+sort of relaxed atomicity guarantees. The x86, for example, can execute
+RMW (Read Modify Write) instructions like inc/dec/cmpxchg without the
lock prefix and the associated latency penalty.
Access to the variable without the lock prefix is not synchronized but
synchronization is not necessary since we are dealing with per cpu
data specific to the currently executing processor. Only the current
processor should be accessing that variable and therefore there are no
concurrency issues with other processors in the system.
+Please note that accesses by remote processors to a per cpu area are
+exceptional situations; remote write operations in particular may
+impact the performance and/or correctness of local RMW operations done
+via this_cpu_*.
+
+The main use of the this_cpu operations has been to optimize counter
+operations.
+
+The following this_cpu() operations with implied preemption protection
+are defined. These operations can be used without worrying about
+preemption and interrupts.
+
+ this_cpu_read(pcp)
+ this_cpu_write(pcp, val)
+ this_cpu_add(pcp, val)
+ this_cpu_and(pcp, val)
+ this_cpu_or(pcp, val)
+ this_cpu_add_return(pcp, val)
+ this_cpu_xchg(pcp, nval)
+ this_cpu_cmpxchg(pcp, oval, nval)
+ this_cpu_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)
+ this_cpu_sub(pcp, val)
+ this_cpu_inc(pcp)
+ this_cpu_dec(pcp)
+ this_cpu_sub_return(pcp, val)
+ this_cpu_inc_return(pcp)
+ this_cpu_dec_return(pcp)
+
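+A minimal sketch of the classic counter use (the counter name is made up
+for illustration):
+
+	DEFINE_PER_CPU(unsigned long, nr_widgets);
+
+	/* Safe in any context: preemption and interrupt safety are implied */
+	static inline void count_widget(void)
+	{
+		this_cpu_inc(nr_widgets);
+	}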
+
+Inner working of this_cpu operations
+------------------------------------
+
On x86 the fs: or the gs: segment registers contain the base of the
per cpu area. It is then possible to simply use the segment override
to relocate a per cpu relative address to the proper per cpu area for
the processor. A this_cpu_read(x), for example, results in a single
instruction

mov ax, gs:[x]
instead of a sequence of calculation of the address and then a fetch
-from that address which occurs with the percpu operations. Before
+from that address which occurs with the per cpu operations. Before
this_cpu_ops such sequence also required preempt disable/enable to
prevent the kernel from moving the thread to a different processor
while the calculation is performed.
-The main use of the this_cpu operations has been to optimize counter
-operations.
+Consider the following this_cpu operation:
this_cpu_inc(x)
-results in the following single instruction (no lock prefix!)
+The above results in the following single instruction (no lock prefix!)
inc gs:[x]
instead of the following operations required if there is no segment
-register.
+register:
	int *y;
	int cpu;

	cpu = get_cpu();
	y = per_cpu_ptr(&x, cpu);
	(*y)++;
	put_cpu();
-Note that these operations can only be used on percpu data that is
+Note that these operations can only be used on per cpu data that is
reserved for a specific processor. Without disabling preemption in the
surrounding code this_cpu_inc() will only guarantee that one of the
-percpu counters is correctly incremented. However, there is no
+per cpu counters is correctly incremented. However, there is no
guarantee that the OS will not move the process directly before or
after the this_cpu instruction is executed. In general this means that
the values of the individual counters for each processor are
meaningless.
Per cpu variables are used for performance reasons. Bouncing cache
lines can be avoided if multiple processors concurrently go through
the same code paths. Since each processor has its own per cpu
-variables no concurrent cacheline updates take place. The price that
+variables no concurrent cache line updates take place. The price that
has to be paid for this optimization is the need to add up the per cpu
-counters when the value of the counter is needed.
+counters when the value of a counter is needed.
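+
+Continuing the hypothetical nr_widgets counter from the sketch above, adding
+up the per cpu counters could look like this:
+
+	unsigned long total_widgets(void)
+	{
+		unsigned long sum = 0;
+		int cpu;
+
+		for_each_possible_cpu(cpu)
+			sum += per_cpu(nr_widgets, cpu);
+		return sum;
+	}
+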
Special operations:

	y = this_cpu_ptr(&x)

Takes the offset of a per cpu variable (&x!) and returns the address
of the per cpu variable that belongs to the currently executing
processor. this_cpu_ptr avoids multiple steps that the common
get_cpu/put_cpu sequence requires. No processor number is
-available. Instead the offset of the local per cpu area is simply
-added to the percpu offset.
+available. Instead, the offset of the local per cpu area is simply
+added to the per cpu offset.
+Note that this operation is usually used in a code segment when
+preemption has been disabled. The pointer is then used to
+access local per cpu data in a critical section. When preemption
+is re-enabled this pointer is usually no longer useful since it may
+no longer point to per cpu data of the current processor.
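+
+A sketch of that pattern (widget_queue, widget_queues and enqueue() are
+hypothetical names):
+
+	DEFINE_PER_CPU(struct widget_queue, widget_queues);
+
+	struct widget_queue *q;
+
+	preempt_disable();
+	q = this_cpu_ptr(&widget_queues);
+	enqueue(q, w);		/* q is only valid while preemption is off */
+	preempt_enable();
+	/* do not dereference q past this point */
+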
Per cpu variables and offsets
-----------------------------
-Per cpu variables have *offsets* to the beginning of the percpu
+Per cpu variables have *offsets* to the beginning of the per cpu
area. They do not have addresses although they look like that in the
code. Offsets cannot be directly dereferenced. The offset must be
-added to a base pointer of a percpu area of a processor in order to
+added to a base pointer of a per cpu area of a processor in order to
form a valid address.
Therefore the use of x or &x outside of the context of per cpu
operations is invalid and will generally be treated like a NULL
pointer dereference.
-In the context of per cpu operations
+ DEFINE_PER_CPU(int, x);
- x is a per cpu variable. Most this_cpu operations take a cpu
- variable.
+In the context of per cpu operations the above implies that x is a per
+cpu variable. Most this_cpu operations take a cpu variable.
- &x is the *offset* a per cpu variable. this_cpu_ptr() takes
- the offset of a per cpu variable which makes this look a bit
- strange.
+ int __percpu *p = &x;
+&x and hence p is the *offset* of a per cpu variable. this_cpu_ptr()
+takes the offset of a per cpu variable which makes this look a bit
+strange.
Operations on a field of a per cpu structure

Given DEFINE_PER_CPU(struct s, p), where struct s has integer fields
n and m, an offset to the structure can be taken and used:
struct s __percpu *ps = &p;
- z = this_cpu_dec(ps->m);
+ this_cpu_dec(ps->m);
z = this_cpu_inc_return(ps->n);
Variants of this_cpu ops
-------------------------
-this_cpu ops are interrupt safe. Some architecture do not support
+this_cpu ops are interrupt safe. Some architectures do not support
these per cpu local operations. In that case the operation must be
replaced by code that disables interrupts, then does the operations
-that are guaranteed to be atomic and then reenable interrupts. Doing
+that are guaranteed to be atomic and then re-enable interrupts. Doing
so is expensive. If there are other reasons why the scheduler cannot
change the processor we are executing on then there is no reason to
-disable interrupts. For that purpose the __this_cpu operations are
-provided. For example.
-
- __this_cpu_inc(x);
-
-Will increment x and will not fallback to code that disables
+disable interrupts. For that purpose the following __this_cpu operations
+are provided.
+
+These operations have no guarantee against concurrent interrupts or
+preemption. If a per cpu variable is not used in an interrupt context
+and the scheduler cannot preempt, then they are safe. If any interrupts
+still occur while an operation is in progress and if the interrupt too
+modifies the variable, then RMW actions can not be guaranteed to be
+safe.
+
+ __this_cpu_read(pcp)
+ __this_cpu_write(pcp, val)
+ __this_cpu_add(pcp, val)
+ __this_cpu_and(pcp, val)
+ __this_cpu_or(pcp, val)
+ __this_cpu_add_return(pcp, val)
+ __this_cpu_xchg(pcp, nval)
+ __this_cpu_cmpxchg(pcp, oval, nval)
+ __this_cpu_cmpxchg_double(pcp1, pcp2, oval1, oval2, nval1, nval2)
+ __this_cpu_sub(pcp, val)
+ __this_cpu_inc(pcp)
+ __this_cpu_dec(pcp)
+ __this_cpu_sub_return(pcp, val)
+ __this_cpu_inc_return(pcp)
+ __this_cpu_dec_return(pcp)
+
+
+For example, __this_cpu_inc(x) will increment x and will not fall back
+to code that disables
interrupts on platforms that cannot accomplish atomicity through
address relocation and a Read-Modify-Write operation in the same
instruction.
-
&this_cpu_ptr(pp)->n vs this_cpu_ptr(&pp->n)
--------------------------------------------
The first operation takes the offset and forms an address and then
-adds the offset of the n field.
+adds the offset of the n field. This may result in two add
+instructions emitted by the compiler.
The second one first adds the two offsets and then does the
relocation. IMHO the second form looks cleaner and has an easier time
with (). The second form also is consistent with the way
this_cpu_read() and friends are used.
-Christoph Lameter, April 3rd, 2013
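+
+For instance, with names mirroring the heading above:
+
+	struct s { int n; };
+	DEFINE_PER_CPU(struct s, p);
+	struct s __percpu *pp = &p;
+
+	int *a = &this_cpu_ptr(pp)->n;	/* relocate pp, then add offset of n */
+	int *b = this_cpu_ptr(&pp->n);	/* fold both offsets, relocate once */
+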
+Remote access to per cpu data
+------------------------------
+
+Per cpu data structures are designed to be used by one cpu exclusively.
+If you use the variables as intended, this_cpu_ops() are guaranteed to
+be "atomic" as no other CPU has access to these data structures.
+
+There are special cases where you might need to access per cpu data
+structures remotely. It is usually safe to do a remote read access
+and that is frequently done to summarize counters. Remote write access
+is something which could be problematic because this_cpu ops do not
+have lock semantics. A remote write may interfere with a this_cpu
+RMW operation.
+
+Remote write accesses to per cpu data structures are highly discouraged
+unless absolutely necessary. Please consider using an IPI to wake up
+the remote CPU and perform the update to its per cpu area.
+
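+A sketch of that approach (reusing the hypothetical nr_widgets counter from
+above; smp_call_function_single() runs the given function on the target cpu):
+
+	static void reset_widgets(void *info)
+	{
+		this_cpu_write(nr_widgets, 0);	/* executes on the target cpu */
+	}
+
+	/* instead of a remote write to cpu's per cpu area: */
+	smp_call_function_single(cpu, reset_widgets, NULL, 1);
+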
+To access per cpu data structures remotely, typically the per_cpu_ptr()
+function is used:
+
+
+ DEFINE_PER_CPU(struct data, datap);
+
+ struct data *p = per_cpu_ptr(&datap, cpu);
+
+This makes it explicit that we are getting ready to access a per cpu
+area remotely.
+
+You can also do the following to convert the datap offset to an address
+
+ struct data *p = this_cpu_ptr(&datap);
+
+but passing pointers calculated via this_cpu_ptr() to other cpus is
+unusual and should be avoided.
+
+Remote accesses are typically only for reading the status of another cpu's
+per cpu data. Write accesses can cause unique problems due to the
+relaxed synchronization requirements for this_cpu operations.
+
+One example that illustrates some concerns with write operations is
+the following scenario that occurs because two per cpu variables
+share a cache line but the relaxed synchronization is applied to
+only one process updating the cache line.
+
+Consider the following example
+
+
+ struct test {
+ atomic_t a;
+ int b;
+ };
+
+ DEFINE_PER_CPU(struct test, onecacheline);
+
+There is some concern about what would happen if the field 'a' is updated
+remotely from one processor and the local processor uses this_cpu ops
+to update field b. Care should be taken that such simultaneous accesses to
+data within the same cache line are avoided. Also costly synchronization
+may be necessary. IPIs are generally recommended in such scenarios instead
+of a remote write to the per cpu area of another processor.
+
+Even in cases where the remote writes are rare, please bear in
+mind that a remote write will evict the cache line from the processor
+that most likely will access it. If the processor wakes up and finds a
+missing local cache line of a per cpu area, its performance and hence
+the wake up times will be affected.
+
+Christoph Lameter, August 4th, 2014
+Pranith Kumar, Aug 2nd, 2014
profiles. If you believe that individual invalidations are being
called too often, you can lower the tunable:
- /sys/debug/kernel/x86/tlb_single_page_flush_ceiling
+ /sys/kernel/debug/x86/tlb_single_page_flush_ceiling
This will cause us to do the global flush for more cases.
Lowering it to 0 will disable the use of the individual flushes.
ARM/Rockchip SoC support
M: Heiko Stuebner <heiko@sntech.de>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
+L: linux-rockchip@lists.infradead.org
S: Maintained
F: arch/arm/mach-rockchip/
F: drivers/*/*rockchip*
F: Documentation/filesystems/befs.txt
F: fs/befs/
+BECKHOFF CX5020 ETHERCAT MASTER DRIVER
+M: Dariusz Marcinkiewicz <reksio@newterm.pl>
+L: netdev@vger.kernel.org
+S: Maintained
+F: drivers/net/ethernet/ec_bhf.c
+
BFS FILE SYSTEM
M: "Tigran A. Aivazian" <tigran@aivazian.fsnet.co.uk>
S: Maintained
F: drivers/scsi/bnx2i/
BROADCOM KONA GPIO DRIVER
-M: Markus Mayer <markus.mayer@linaro.org>
+M: Ray Jui <rjui@broadcom.com>
L: bcm-kernel-feedback-list@broadcom.com
S: Supported
F: drivers/gpio/gpio-bcm-kona.c
F: include/uapi/drm/tegra_drm.h
F: Documentation/devicetree/bindings/gpu/nvidia,tegra20-host1x.txt
+DRM DRIVERS FOR RENESAS
+M: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
+L: dri-devel@lists.freedesktop.org
+L: linux-sh@vger.kernel.org
+T: git git://people.freedesktop.org/~airlied/linux
+S: Supported
+F: drivers/gpu/drm/rcar-du/
+F: drivers/gpu/drm/shmobile/
+F: include/linux/platform_data/rcar-du.h
+F: include/linux/platform_data/shmob_drm.h
+
DSBR100 USB FM RADIO DRIVER
M: Alexey Klimov <klimov.linux@gmail.com>
L: linux-media@vger.kernel.org
S: Maintained
F: drivers/media/radio/radio-mr800.c
+MRF24J40 IEEE 802.15.4 RADIO DRIVER
+M: Alan Ott <alan@signal11.us>
+L: linux-wpan@vger.kernel.org
+S: Maintained
+F: drivers/net/ieee802154/mrf24j40.c
+
MSI LAPTOP SUPPORT
M: "Lee, Chun-Yi" <jlee@suse.com>
L: platform-driver-x86@vger.kernel.org
VERSION = 3
PATCHLEVEL = 17
SUBLEVEL = 0
-EXTRAVERSION = -rc1
+EXTRAVERSION = -rc2
NAME = Shuffling Zombie Juror
# *DOCUMENTATION*
i2c@13860000 {
pinctrl-0 = <&i2c0_bus>;
pinctrl-names = "default";
+ samsung,i2c-sda-delay = <100>;
+ samsung,i2c-max-bus-freq = <400000>;
status = "okay";
usb3503: usb3503@08 {
max77686: pmic@09 {
compatible = "maxim,max77686";
+ interrupt-parent = <&gpx3>;
+ interrupts = <2 0>;
+ pinctrl-names = "default";
+ pinctrl-0 = <&max77686_irq>;
reg = <0x09>;
#clock-cells = <1>;
samsung,pins = "gpx1-3";
samsung,pin-pud = <0>;
};
+
+ max77686_irq: max77686-irq {
+ samsung,pins = "gpx3-2";
+ samsung,pin-function = <0>;
+ samsung,pin-pud = <0>;
+ samsung,pin-drv = <0>;
+ };
};
compatible = "fsl,imx53-vpu";
reg = <0x63ff4000 0x1000>;
interrupts = <9>;
- clocks = <&clks IMX5_CLK_VPU_GATE>,
+ clocks = <&clks IMX5_CLK_VPU_REFERENCE_GATE>,
<&clks IMX5_CLK_VPU_GATE>;
clock-names = "per", "ahb";
resets = <&src 1>;
pinctrl-names = "default";
pinctrl-0 = <&pinctrl_enet>;
phy-mode = "rgmii";
- phy-reset-gpios = <&gpio3 23 0>;
+ phy-reset-gpios = <&gpio1 25 0>;
phy-supply = <&vgen2_1v2_eth>;
status = "okay";
};
MX6QDL_PAD_ENET_REF_CLK__ENET_TX_CLK 0x1b0b0
MX6QDL_PAD_ENET_MDIO__ENET_MDIO 0x1b0b0
MX6QDL_PAD_ENET_MDC__ENET_MDC 0x1b0b0
+ MX6QDL_PAD_ENET_CRS_DV__GPIO1_IO25 0x1b0b0
MX6QDL_PAD_GPIO_16__ENET_REF_CLK 0x4001b0a8
>;
};
#define MX6SX_PAD_GPIO1_IO07__USDHC2_WP 0x0030 0x0378 0x0870 0x1 0x1
#define MX6SX_PAD_GPIO1_IO07__ENET2_MDIO 0x0030 0x0378 0x0770 0x2 0x0
#define MX6SX_PAD_GPIO1_IO07__AUDMUX_MCLK 0x0030 0x0378 0x0000 0x3 0x0
-#define MX6SX_PAD_GPIO1_IO07__UART1_CTS_B 0x0030 0x0378 0x082C 0x4 0x1
+#define MX6SX_PAD_GPIO1_IO07__UART1_CTS_B 0x0030 0x0378 0x0000 0x4 0x0
#define MX6SX_PAD_GPIO1_IO07__GPIO1_IO_7 0x0030 0x0378 0x0000 0x5 0x0
#define MX6SX_PAD_GPIO1_IO07__SRC_EARLY_RESET 0x0030 0x0378 0x0000 0x6 0x0
#define MX6SX_PAD_GPIO1_IO07__DCIC2_OUT 0x0030 0x0378 0x0000 0x7 0x0
#define MX6SX_PAD_GPIO1_IO09__WDOG2_WDOG_B 0x0038 0x0380 0x0000 0x1 0x0
#define MX6SX_PAD_GPIO1_IO09__SDMA_EXT_EVENT_1 0x0038 0x0380 0x0820 0x2 0x0
#define MX6SX_PAD_GPIO1_IO09__CCM_OUT0 0x0038 0x0380 0x0000 0x3 0x0
-#define MX6SX_PAD_GPIO1_IO09__UART2_CTS_B 0x0038 0x0380 0x0834 0x4 0x1
+#define MX6SX_PAD_GPIO1_IO09__UART2_CTS_B 0x0038 0x0380 0x0000 0x4 0x0
#define MX6SX_PAD_GPIO1_IO09__GPIO1_IO_9 0x0038 0x0380 0x0000 0x5 0x0
#define MX6SX_PAD_GPIO1_IO09__SRC_INT_BOOT 0x0038 0x0380 0x0000 0x6 0x0
#define MX6SX_PAD_GPIO1_IO09__OBSERVE_MUX_OUT_4 0x0038 0x0380 0x0000 0x7 0x0
#define MX6SX_PAD_CSI_DATA07__ESAI_TX3_RX2 0x0068 0x03B0 0x079C 0x1 0x1
#define MX6SX_PAD_CSI_DATA07__I2C4_SDA 0x0068 0x03B0 0x07C4 0x2 0x2
#define MX6SX_PAD_CSI_DATA07__KPP_ROW_7 0x0068 0x03B0 0x07DC 0x3 0x0
-#define MX6SX_PAD_CSI_DATA07__UART6_CTS_B 0x0068 0x03B0 0x0854 0x4 0x1
+#define MX6SX_PAD_CSI_DATA07__UART6_CTS_B 0x0068 0x03B0 0x0000 0x4 0x0
#define MX6SX_PAD_CSI_DATA07__GPIO1_IO_21 0x0068 0x03B0 0x0000 0x5 0x0
#define MX6SX_PAD_CSI_DATA07__WEIM_DATA_16 0x0068 0x03B0 0x0000 0x6 0x0
#define MX6SX_PAD_CSI_DATA07__DCIC1_OUT 0x0068 0x03B0 0x0000 0x7 0x0
#define MX6SX_PAD_CSI_VSYNC__CSI1_VSYNC 0x0078 0x03C0 0x0708 0x0 0x0
#define MX6SX_PAD_CSI_VSYNC__ESAI_TX5_RX0 0x0078 0x03C0 0x07A4 0x1 0x1
#define MX6SX_PAD_CSI_VSYNC__AUDMUX_AUD6_RXD 0x0078 0x03C0 0x0674 0x2 0x1
-#define MX6SX_PAD_CSI_VSYNC__UART4_CTS_B 0x0078 0x03C0 0x0844 0x3 0x3
+#define MX6SX_PAD_CSI_VSYNC__UART4_CTS_B 0x0078 0x03C0 0x0000 0x3 0x0
#define MX6SX_PAD_CSI_VSYNC__MQS_RIGHT 0x0078 0x03C0 0x0000 0x4 0x0
#define MX6SX_PAD_CSI_VSYNC__GPIO1_IO_25 0x0078 0x03C0 0x0000 0x5 0x0
#define MX6SX_PAD_CSI_VSYNC__WEIM_DATA_24 0x0078 0x03C0 0x0000 0x6 0x0
#define MX6SX_PAD_ENET2_TX_CLK__ENET2_TX_CLK 0x00A0 0x03E8 0x0000 0x0 0x0
#define MX6SX_PAD_ENET2_TX_CLK__ENET2_REF_CLK2 0x00A0 0x03E8 0x076C 0x1 0x1
#define MX6SX_PAD_ENET2_TX_CLK__I2C3_SDA 0x00A0 0x03E8 0x07BC 0x2 0x1
-#define MX6SX_PAD_ENET2_TX_CLK__UART1_CTS_B 0x00A0 0x03E8 0x082C 0x3 0x3
+#define MX6SX_PAD_ENET2_TX_CLK__UART1_CTS_B 0x00A0 0x03E8 0x0000 0x3 0x0
#define MX6SX_PAD_ENET2_TX_CLK__MLB_CLK 0x00A0 0x03E8 0x07E8 0x4 0x1
#define MX6SX_PAD_ENET2_TX_CLK__GPIO2_IO_9 0x00A0 0x03E8 0x0000 0x5 0x0
#define MX6SX_PAD_ENET2_TX_CLK__USB_OTG2_PWR 0x00A0 0x03E8 0x0000 0x6 0x0
#define MX6SX_PAD_KEY_COL4__SAI2_RX_BCLK 0x00B4 0x03FC 0x0808 0x7 0x0
#define MX6SX_PAD_KEY_ROW0__KPP_ROW_0 0x00B8 0x0400 0x0000 0x0 0x0
#define MX6SX_PAD_KEY_ROW0__USDHC3_WP 0x00B8 0x0400 0x0000 0x1 0x0
-#define MX6SX_PAD_KEY_ROW0__UART6_CTS_B 0x00B8 0x0400 0x0854 0x2 0x3
+#define MX6SX_PAD_KEY_ROW0__UART6_CTS_B 0x00B8 0x0400 0x0000 0x2 0x0
#define MX6SX_PAD_KEY_ROW0__ECSPI1_MOSI 0x00B8 0x0400 0x0718 0x3 0x0
#define MX6SX_PAD_KEY_ROW0__AUDMUX_AUD5_TXD 0x00B8 0x0400 0x0660 0x4 0x0
#define MX6SX_PAD_KEY_ROW0__GPIO2_IO_15 0x00B8 0x0400 0x0000 0x5 0x0
#define MX6SX_PAD_KEY_ROW1__M4_NMI 0x00BC 0x0404 0x0000 0x8 0x0
#define MX6SX_PAD_KEY_ROW2__KPP_ROW_2 0x00C0 0x0408 0x0000 0x0 0x0
#define MX6SX_PAD_KEY_ROW2__USDHC4_WP 0x00C0 0x0408 0x0878 0x1 0x1
-#define MX6SX_PAD_KEY_ROW2__UART5_CTS_B 0x00C0 0x0408 0x084C 0x2 0x3
+#define MX6SX_PAD_KEY_ROW2__UART5_CTS_B 0x00C0 0x0408 0x0000 0x2 0x0
#define MX6SX_PAD_KEY_ROW2__CAN1_RX 0x00C0 0x0408 0x068C 0x3 0x1
#define MX6SX_PAD_KEY_ROW2__CANFD_RX1 0x00C0 0x0408 0x0694 0x4 0x1
#define MX6SX_PAD_KEY_ROW2__GPIO2_IO_17 0x00C0 0x0408 0x0000 0x5 0x0
#define MX6SX_PAD_NAND_DATA05__RAWNAND_DATA05 0x0164 0x04AC 0x0000 0x0 0x0
#define MX6SX_PAD_NAND_DATA05__USDHC2_DATA5 0x0164 0x04AC 0x0000 0x1 0x0
#define MX6SX_PAD_NAND_DATA05__QSPI2_B_DQS 0x0164 0x04AC 0x0000 0x2 0x0
-#define MX6SX_PAD_NAND_DATA05__UART3_CTS_B 0x0164 0x04AC 0x083C 0x3 0x1
+#define MX6SX_PAD_NAND_DATA05__UART3_CTS_B 0x0164 0x04AC 0x0000 0x3 0x0
#define MX6SX_PAD_NAND_DATA05__AUDMUX_AUD4_RXC 0x0164 0x04AC 0x064C 0x4 0x0
#define MX6SX_PAD_NAND_DATA05__GPIO4_IO_9 0x0164 0x04AC 0x0000 0x5 0x0
#define MX6SX_PAD_NAND_DATA05__WEIM_AD_5 0x0164 0x04AC 0x0000 0x6 0x0
#define MX6SX_PAD_QSPI1A_SS1_B__SIM_M_HADDR_12 0x019C 0x04E4 0x0000 0x7 0x0
#define MX6SX_PAD_QSPI1A_SS1_B__SDMA_DEBUG_PC_3 0x019C 0x04E4 0x0000 0x9 0x0
#define MX6SX_PAD_QSPI1B_DATA0__QSPI1_B_DATA_0 0x01A0 0x04E8 0x0000 0x0 0x0
-#define MX6SX_PAD_QSPI1B_DATA0__UART3_CTS_B 0x01A0 0x04E8 0x083C 0x1 0x4
+#define MX6SX_PAD_QSPI1B_DATA0__UART3_CTS_B 0x01A0 0x04E8 0x0000 0x1 0x0
#define MX6SX_PAD_QSPI1B_DATA0__ECSPI3_MOSI 0x01A0 0x04E8 0x0738 0x2 0x1
#define MX6SX_PAD_QSPI1B_DATA0__ESAI_RX_FS 0x01A0 0x04E8 0x0778 0x3 0x2
#define MX6SX_PAD_QSPI1B_DATA0__CSI1_DATA_22 0x01A0 0x04E8 0x06F4 0x4 0x1
#define MX6SX_PAD_SD1_DATA2__AUDMUX_AUD5_TXFS 0x0230 0x0578 0x0670 0x1 0x1
#define MX6SX_PAD_SD1_DATA2__PWM3_OUT 0x0230 0x0578 0x0000 0x2 0x0
#define MX6SX_PAD_SD1_DATA2__GPT_COMPARE2 0x0230 0x0578 0x0000 0x3 0x0
-#define MX6SX_PAD_SD1_DATA2__UART2_CTS_B 0x0230 0x0578 0x0834 0x4 0x2
+#define MX6SX_PAD_SD1_DATA2__UART2_CTS_B 0x0230 0x0578 0x0000 0x4 0x0
#define MX6SX_PAD_SD1_DATA2__GPIO6_IO_4 0x0230 0x0578 0x0000 0x5 0x0
#define MX6SX_PAD_SD1_DATA2__ECSPI4_RDY 0x0230 0x0578 0x0000 0x6 0x0
#define MX6SX_PAD_SD1_DATA2__CCM_OUT0 0x0230 0x0578 0x0000 0x7 0x0
#define MX6SX_PAD_SD2_DATA3__VADC_CLAMP_CURRENT_3 0x024C 0x0594 0x0000 0x8 0x0
#define MX6SX_PAD_SD2_DATA3__MMDC_DEBUG_31 0x024C 0x0594 0x0000 0x9 0x0
#define MX6SX_PAD_SD3_CLK__USDHC3_CLK 0x0250 0x0598 0x0000 0x0 0x0
-#define MX6SX_PAD_SD3_CLK__UART4_CTS_B 0x0250 0x0598 0x0844 0x1 0x0
+#define MX6SX_PAD_SD3_CLK__UART4_CTS_B 0x0250 0x0598 0x0000 0x1 0x0
#define MX6SX_PAD_SD3_CLK__ECSPI4_SCLK 0x0250 0x0598 0x0740 0x2 0x0
#define MX6SX_PAD_SD3_CLK__AUDMUX_AUD6_RXFS 0x0250 0x0598 0x0680 0x3 0x0
#define MX6SX_PAD_SD3_CLK__LCDIF2_VSYNC 0x0250 0x0598 0x0000 0x4 0x0
#define MX6SX_PAD_SD3_DATA7__USDHC3_DATA7 0x0274 0x05BC 0x0000 0x0 0x0
#define MX6SX_PAD_SD3_DATA7__CAN1_RX 0x0274 0x05BC 0x068C 0x1 0x0
#define MX6SX_PAD_SD3_DATA7__CANFD_RX1 0x0274 0x05BC 0x0694 0x2 0x0
-#define MX6SX_PAD_SD3_DATA7__UART3_CTS_B 0x0274 0x05BC 0x083C 0x3 0x3
+#define MX6SX_PAD_SD3_DATA7__UART3_CTS_B 0x0274 0x05BC 0x0000 0x3 0x0
#define MX6SX_PAD_SD3_DATA7__LCDIF2_DATA_5 0x0274 0x05BC 0x0000 0x4 0x0
#define MX6SX_PAD_SD3_DATA7__GPIO7_IO_9 0x0274 0x05BC 0x0000 0x5 0x0
#define MX6SX_PAD_SD3_DATA7__ENET1_1588_EVENT0_IN 0x0274 0x05BC 0x0000 0x6 0x0
#define MX6SX_PAD_SD4_DATA6__SDMA_DEBUG_EVENT_CHANNEL_1 0x0298 0x05E0 0x0000 0x9 0x0
#define MX6SX_PAD_SD4_DATA7__USDHC4_DATA7 0x029C 0x05E4 0x0000 0x0 0x0
#define MX6SX_PAD_SD4_DATA7__RAWNAND_DATA08 0x029C 0x05E4 0x0000 0x1 0x0
-#define MX6SX_PAD_SD4_DATA7__UART5_CTS_B 0x029C 0x05E4 0x084C 0x2 0x1
+#define MX6SX_PAD_SD4_DATA7__UART5_CTS_B 0x029C 0x05E4 0x0000 0x2 0x0
#define MX6SX_PAD_SD4_DATA7__ECSPI3_SS0 0x029C 0x05E4 0x073C 0x3 0x0
#define MX6SX_PAD_SD4_DATA7__LCDIF2_DATA_15 0x029C 0x05E4 0x0000 0x4 0x0
#define MX6SX_PAD_SD4_DATA7__GPIO6_IO_21 0x029C 0x05E4 0x0000 0x5 0x0
renesas,function = "msiof0";
};
- i2c6_pins: i2c6 {
- renesas,groups = "i2c6";
- renesas,function = "i2c6";
- };
-
usb0_pins: usb0 {
renesas,groups = "usb0";
renesas,function = "usb0";
};
&i2c6 {
- pinctrl-names = "default";
- pinctrl-0 = <&i2c6_pins>;
status = "okay";
clock-frequency = <100000>;
&mmc0 { /* sdmmc */
num-slots = <1>;
status = "okay";
+ pinctrl-names = "default";
+ pinctrl-0 = <&sd0_clk>, <&sd0_cmd>, <&sd0_cd>, <&sd0_bus4>;
vmmc-supply = <&vcc_sd0>;
slot@0 {
&mmc0 {
num-slots = <1>;
status = "okay";
+ pinctrl-names = "default";
+ pinctrl-0 = <&sd0_clk>, <&sd0_cmd>, <&sd0_cd>, <&sd0_bus4>;
vmmc-supply = <&vcc_sd0>;
slot@0 {
clock-frequency = <100000>;
resets = <&apb2_rst 0>;
status = "disabled";
+ #address-cells = <1>;
+ #size-cells = <0>;
};
i2c1: i2c@01c2b000 {
clock-frequency = <100000>;
resets = <&apb2_rst 1>;
status = "disabled";
+ #address-cells = <1>;
+ #size-cells = <0>;
};
i2c2: i2c@01c2b400 {
clock-frequency = <100000>;
resets = <&apb2_rst 2>;
status = "disabled";
+ #address-cells = <1>;
+ #size-cells = <0>;
};
i2c3: i2c@01c2b800 {
clock-frequency = <100000>;
resets = <&apb2_rst 3>;
status = "disabled";
+ #address-cells = <1>;
+ #size-cells = <0>;
};
gmac: ethernet@01c30000 {
vcc4-supply = <&sys_3v3_reg>;
vcc5-supply = <&sys_3v3_reg>;
vcc6-supply = <&vio_reg>;
- vcc7-supply = <&sys_5v0_reg>;
+ vcc7-supply = <&charge_pump_5v0_reg>;
vccio-supply = <&sys_3v3_reg>;
regulators {
regulator-max-microvolt = <3300000>;
regulator-always-on;
};
+
+ charge_pump_5v0_reg: regulator@101 {
+ compatible = "regulator-fixed";
+ reg = <101>;
+ regulator-name = "5v0";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ regulator-always-on;
+ };
};
};
vcc4-supply = <&sys_3v3_reg>;
vcc5-supply = <&sys_3v3_reg>;
vcc6-supply = <&vio_reg>;
- vcc7-supply = <&sys_5v0_reg>;
+ vcc7-supply = <&charge_pump_5v0_reg>;
vccio-supply = <&sys_3v3_reg>;
regulators {
regulator-max-microvolt = <3300000>;
regulator-always-on;
};
+
+ charge_pump_5v0_reg: regulator@101 {
+ compatible = "regulator-fixed";
+ reg = <101>;
+ regulator-name = "5v0";
+ regulator-min-microvolt = <5000000>;
+ regulator-max-microvolt = <5000000>;
+ regulator-always-on;
+ };
};
};
};
pinctrl_esdhc1: esdhc1grp {
- fsl,fsl,pins = <
+ fsl,pins = <
VF610_PAD_PTA24__ESDHC1_CLK 0x31ef
VF610_PAD_PTA25__ESDHC1_CMD 0x31ef
VF610_PAD_PTA26__ESDHC1_DAT0 0x31ef
config SOC_IMX27
bool
- select ARCH_HAS_OPP
select CPU_ARM926T
select IMX_HAVE_IOMUX_V1
select MXC_AVIC
config SOC_IMX5
bool
- select ARCH_HAS_OPP
select HAVE_IMX_SRC
select MXC_TZIC
obj-$(CONFIG_HAVE_IMX_GPC) += gpc.o
obj-$(CONFIG_HAVE_IMX_MMDC) += mmdc.o
obj-$(CONFIG_HAVE_IMX_SRC) += src.o
+ifdef CONFIG_SOC_IMX6
AFLAGS_headsmp.o :=-Wa,-march=armv7-a
obj-$(CONFIG_SMP) += headsmp.o platsmp.o
obj-$(CONFIG_HOTPLUG_CPU) += hotplug.o
+endif
obj-$(CONFIG_SOC_IMX6Q) += clk-imx6q.o mach-imx6q.o
obj-$(CONFIG_SOC_IMX6SL) += clk-imx6sl.o mach-imx6sl.o
obj-$(CONFIG_SOC_IMX6SX) += clk-imx6sx.o mach-imx6sx.o
clk[IMX6QDL_CLK_PLL3_80M] = imx_clk_fixed_factor("pll3_80m", "pll3_usb_otg", 1, 6);
clk[IMX6QDL_CLK_PLL3_60M] = imx_clk_fixed_factor("pll3_60m", "pll3_usb_otg", 1, 8);
clk[IMX6QDL_CLK_TWD] = imx_clk_fixed_factor("twd", "arm", 1, 2);
+ if (cpu_is_imx6dl()) {
+ clk[IMX6QDL_CLK_GPU2D_AXI] = imx_clk_fixed_factor("gpu2d_axi", "mmdc_ch0_axi_podf", 1, 1);
+ clk[IMX6QDL_CLK_GPU3D_AXI] = imx_clk_fixed_factor("gpu3d_axi", "mmdc_ch0_axi_podf", 1, 1);
+ }
clk[IMX6QDL_CLK_PLL4_POST_DIV] = clk_register_divider_table(NULL, "pll4_post_div", "pll4_audio", CLK_SET_RATE_PARENT, base + 0x70, 19, 2, 0, post_div_table, &imx_ccm_lock);
clk[IMX6QDL_CLK_PLL4_AUDIO_DIV] = clk_register_divider(NULL, "pll4_audio_div", "pll4_post_div", CLK_SET_RATE_PARENT, base + 0x170, 15, 1, 0, &imx_ccm_lock);
clk[IMX6QDL_CLK_ESAI_SEL] = imx_clk_mux("esai_sel", base + 0x20, 19, 2, audio_sels, ARRAY_SIZE(audio_sels));
clk[IMX6QDL_CLK_ASRC_SEL] = imx_clk_mux("asrc_sel", base + 0x30, 7, 2, audio_sels, ARRAY_SIZE(audio_sels));
clk[IMX6QDL_CLK_SPDIF_SEL] = imx_clk_mux("spdif_sel", base + 0x30, 20, 2, audio_sels, ARRAY_SIZE(audio_sels));
- clk[IMX6QDL_CLK_GPU2D_AXI] = imx_clk_mux("gpu2d_axi", base + 0x18, 0, 1, gpu_axi_sels, ARRAY_SIZE(gpu_axi_sels));
- clk[IMX6QDL_CLK_GPU3D_AXI] = imx_clk_mux("gpu3d_axi", base + 0x18, 1, 1, gpu_axi_sels, ARRAY_SIZE(gpu_axi_sels));
+ if (cpu_is_imx6q()) {
+ clk[IMX6QDL_CLK_GPU2D_AXI] = imx_clk_mux("gpu2d_axi", base + 0x18, 0, 1, gpu_axi_sels, ARRAY_SIZE(gpu_axi_sels));
+ clk[IMX6QDL_CLK_GPU3D_AXI] = imx_clk_mux("gpu3d_axi", base + 0x18, 1, 1, gpu_axi_sels, ARRAY_SIZE(gpu_axi_sels));
+ }
clk[IMX6QDL_CLK_GPU2D_CORE_SEL] = imx_clk_mux("gpu2d_core_sel", base + 0x18, 16, 2, gpu2d_core_sels, ARRAY_SIZE(gpu2d_core_sels));
clk[IMX6QDL_CLK_GPU3D_CORE_SEL] = imx_clk_mux("gpu3d_core_sel", base + 0x18, 4, 2, gpu3d_core_sels, ARRAY_SIZE(gpu3d_core_sels));
clk[IMX6QDL_CLK_GPU3D_SHADER_SEL] = imx_clk_mux("gpu3d_shader_sel", base + 0x18, 8, 2, gpu3d_shader_sels, ARRAY_SIZE(gpu3d_shader_sels));
ldr r6, [r11, #0x0]
ldr r11, [r0, #PM_INFO_MX6Q_GPC_V_OFFSET]
ldr r6, [r11, #0x0]
+ ldr r11, [r0, #PM_INFO_MX6Q_IOMUXC_V_OFFSET]
+ ldr r6, [r11, #0x0]
/* use r11 to store the IO address */
ldr r11, [r0, #PM_INFO_MX6Q_SRC_V_OFFSET]
select ARM_CPU_SUSPEND if PM || CPU_IDLE
select CPU_V7
select SH_CLK_CPG
+ select SH_INTC
select SYS_SUPPORTS_SH_CMT
select SYS_SUPPORTS_SH_TMU
select CPU_V7
select I2C
select SH_CLK_CPG
+ select SH_INTC
select RENESAS_INTC_IRQPIN
select SYS_SUPPORTS_SH_CMT
select SYS_SUPPORTS_SH_TMU
# The byte offset of the kernel image in RAM from the start of RAM.
ifeq ($(CONFIG_ARM64_RANDOMIZE_TEXT_OFFSET), y)
-TEXT_OFFSET := $(shell awk 'BEGIN {srand(); printf "0x%04x0\n", int(65535 * rand())}')
+TEXT_OFFSET := $(shell awk 'BEGIN {srand(); printf "0x%03x000\n", int(512 * rand())}')
else
TEXT_OFFSET := 0x00080000
endif
CONFIG_BLK_DEV_SD=y
# CONFIG_SCSI_LOWLEVEL is not set
CONFIG_ATA=y
+CONFIG_AHCI_XGENE=y
+CONFIG_PHY_XGENE=y
CONFIG_PATA_PLATFORM=y
CONFIG_PATA_OF_PLATFORM=y
CONFIG_NETDEVICES=y
CONFIG_VIRTIO_NET=y
CONFIG_SMC91X=y
CONFIG_SMSC911X=y
+CONFIG_NET_XGENE=y
# CONFIG_WLAN is not set
CONFIG_INPUT_EVDEV=y
# CONFIG_SERIO_SERPORT is not set
#define __ASM_SPARSEMEM_H
#ifdef CONFIG_SPARSEMEM
-#define MAX_PHYSMEM_BITS 40
+#define MAX_PHYSMEM_BITS 48
#define SECTION_SIZE_BITS 30
#endif
#define __ARM_NR_compat_cacheflush (__ARM_NR_COMPAT_BASE+2)
#define __ARM_NR_compat_set_tls (__ARM_NR_COMPAT_BASE+5)
-#define __NR_compat_syscalls 383
+#define __NR_compat_syscalls 386
#endif
#define __ARCH_WANT_SYS_CLONE
__SYSCALL(__NR_sched_getattr, sys_sched_getattr)
#define __NR_renameat2 382
__SYSCALL(__NR_renameat2, sys_renameat2)
+ /* 383 for seccomp */
+#define __NR_getrandom 384
+__SYSCALL(__NR_getrandom, sys_getrandom)
+#define __NR_memfd_create 385
+__SYSCALL(__NR_memfd_create, sys_memfd_create)
if (l1ip != ICACHE_POLICY_PIPT)
set_bit(ICACHEF_ALIASING, &__icache_flags);
- if (l1ip == ICACHE_POLICY_AIVIVT);
+ if (l1ip == ICACHE_POLICY_AIVIVT)
set_bit(ICACHEF_AIVIVT, &__icache_flags);
pr_info("Detected %s I-cache on CPU%d\n", icache_policy_str[l1ip], cpu);
if (uefi_debug)
pr_cont("\n");
}
+
+ set_bit(EFI_MEMMAP, &efi.flags);
}
efi_native_runtime_setup();
set_bit(EFI_RUNTIME_SERVICES, &efi.flags);
+ efi.runtime_version = efi.systab->hdr.revision;
+
return 0;
err_unmap:
#define KERNEL_RAM_VADDR (PAGE_OFFSET + TEXT_OFFSET)
-#if (TEXT_OFFSET & 0xf) != 0
-#error TEXT_OFFSET must be at least 16B aligned
-#elif (PAGE_OFFSET & 0xfffff) != 0
+#if (TEXT_OFFSET & 0xfff) != 0
+#error TEXT_OFFSET must be at least 4KB aligned
+#elif (PAGE_OFFSET & 0x1fffff) != 0
#error PAGE_OFFSET must be at least 2MB aligned
-#elif TEXT_OFFSET > 0xfffff
+#elif TEXT_OFFSET > 0x1fffff
#error TEXT_OFFSET must be less than 2MB
#endif
if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
trace_sys_enter(regs, regs->syscallno);
-#ifdef CONFIG_AUDITSYSCALL
audit_syscall_entry(syscall_get_arch(), regs->syscallno,
regs->orig_x0, regs->regs[1], regs->regs[2], regs->regs[3]);
-#endif
return regs->syscallno;
}
asmlinkage void syscall_trace_exit(struct pt_regs *regs)
{
-#ifdef CONFIG_AUDITSYSCALL
audit_syscall_exit(regs);
-#endif
if (test_thread_flag(TIF_SYSCALL_TRACEPOINT))
trace_sys_exit(regs, regs_return_value(regs));
#include <linux/of_fdt.h>
#include <linux/dma-mapping.h>
#include <linux/dma-contiguous.h>
+#include <linux/efi.h>
#include <asm/fixmap.h>
#include <asm/sections.h>
memblock_reserve(__virt_to_phys(initrd_start), initrd_end - initrd_start);
#endif
- early_init_fdt_scan_reserved_mem();
+ if (!efi_enabled(EFI_MEMMAP))
+ early_init_fdt_scan_reserved_mem();
/* 4GB maximum for 32-bit only capable devices */
if (IS_ENABLED(CONFIG_ZONE_DMA))
pr_warn("DB1200: cant get I2C close to 50MHz\n");
else
clk_set_rate(c, pfc);
+ clk_prepare_enable(c);
clk_put(c);
}
}
/* Audio PSC clock is supplied externally. (FIXME: platdata!!) */
- c = clk_get(NULL, "psc1_intclk");
- if (!IS_ERR(c)) {
- clk_prepare_enable(c);
- clk_put(c);
- }
__raw_writel(PSC_SEL_CLK_SERCLK,
(void __iomem *)KSEG1ADDR(AU1550_PSC1_PHYS_ADDR) + PSC_SEL_OFFSET);
wmb();
switch (bcm47xx_bus_type) {
#ifdef CONFIG_BCM47XX_SSB
case BCM47XX_BUS_TYPE_SSB:
- ssb_watchdog_timer_set(&bcm47xx_bus.ssb, 3);
+ if (bcm47xx_bus.ssb.chip_id == 0x4785)
+ write_c0_diag4(1 << 22);
+ ssb_watchdog_timer_set(&bcm47xx_bus.ssb, 1);
+ if (bcm47xx_bus.ssb.chip_id == 0x4785) {
+ __asm__ __volatile__(
+ ".set\tmips3\n\t"
+ "sync\n\t"
+ "wait\n\t"
+ ".set\tmips0");
+ }
break;
#endif
#ifdef CONFIG_BCM47XX_BCMA
case BCM47XX_BUS_TYPE_BCMA:
- bcma_chipco_watchdog_timer_set(&bcm47xx_bus.bcma.bus.drv_cc, 3);
+ bcma_chipco_watchdog_timer_set(&bcm47xx_bus.bcma.bus.drv_cc, 1);
break;
#endif
}
static int octeon_uart;
extern asmlinkage void handle_int(void);
-extern asmlinkage void plat_irq_dispatch(void);
/**
* Return non zero if we are currently running in the Octeon simulator
octeon_kill_core(NULL);
}
+static char __read_mostly octeon_system_type[80];
+
+static int __init init_octeon_system_type(void)
+{
+ snprintf(octeon_system_type, sizeof(octeon_system_type), "%s (%s)",
+ cvmx_board_type_to_string(octeon_bootinfo->board_type),
+ octeon_model_get_string(read_c0_prid()));
+
+ return 0;
+}
+early_initcall(init_octeon_system_type);
+
/**
* Return a string representing the system type
*
*/
const char *octeon_board_type_string(void)
{
- static char name[80];
- sprintf(name, "%s (%s)",
- cvmx_board_type_to_string(octeon_bootinfo->board_type),
- octeon_model_get_string(read_c0_prid()));
- return name;
+ return octeon_system_type;
}
const char *get_system_type(void)
--- /dev/null
+/*
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ *
+ * Copyright (C) 2014, Imagination Technologies Ltd.
+ *
+ * EVA functions for generic code
+ */
+
+#ifndef _ASM_EVA_H
+#define _ASM_EVA_H
+
+#include <kernel-entry-init.h>
+
+#ifdef __ASSEMBLY__
+
+#ifdef CONFIG_EVA
+
+/*
+ * EVA early init code
+ *
+ * Platforms must define their own 'platform_eva_init' macro in
+ * their kernel-entry-init.h header. This macro usually does the
+ * platform specific configuration of the segmentation registers,
+ * and it is normally called from assembly code.
+ *
+ */
+
+.macro eva_init
+platform_eva_init
+.endm
+
+#else
+
+.macro eva_init
+.endm
+
+#endif /* CONFIG_EVA */
+
+#endif /* __ASSEMBLY__ */
+
+#endif
#endif
#define GICBIS(reg, mask, bits) \
do { u32 data; \
- GICREAD((reg), data); \
+ GICREAD(reg, data); \
data &= ~(mask); \
data |= ((bits) & (mask)); \
GICWRITE((reg), data); \
#define irq_canonicalize(irq) (irq) /* Sane hardware, sane code ... */
#endif
+asmlinkage void plat_irq_dispatch(void);
+
extern void do_IRQ(unsigned int irq);
extern void arch_init_irq(void);
#ifndef __ASM_MACH_MIPS_KERNEL_ENTRY_INIT_H
#define __ASM_MACH_MIPS_KERNEL_ENTRY_INIT_H
+#include <asm/regdef.h>
+#include <asm/mipsregs.h>
+
/*
* Prepare segments for EVA boot:
*
* This is in case the processor boots in legacy configuration
* (SI_EVAReset is de-asserted and CONFIG5.K == 0)
*
- * On entry, t1 is loaded with CP0_CONFIG
- *
* ========================= Mappings =============================
* Virtual memory Physical memory Mapping
* 0x00000000 - 0x7fffffff 0x80000000 - 0xfffffffff MUSUK (kuseg)
*
*
* Lowmem is expanded to 2GB
+ *
+ * The following code uses the t0, t1, t2 and ra registers without
+ * previously preserving them.
+ *
*/
- .macro eva_entry
+ .macro platform_eva_init
+
+ .set push
+ .set reorder
/*
* Get Config.K0 value and use it to program
* the segmentation registers
*/
+ mfc0 t1, CP0_CONFIG
andi t1, 0x7 /* CCA */
move t2, t1
ins t2, t1, 16, 3
mtc0 t0, $16, 5
sync
jal mips_ihb
+
+ .set pop
.endm
.macro kernel_entry_setup
sll t0, t0, 6 /* SC bit */
bgez t0, 9f
- eva_entry
+ platform_eva_init
b 0f
9:
/* Assume we came from YAMON... */
#ifdef CONFIG_EVA
sync
ehb
- mfc0 t1, CP0_CONFIG
- eva_entry
+ platform_eva_init
#endif
.endm
#include <asm/mach-netlogic/multi-node.h>
-#ifdef CONFIG_SMP
-#define topology_physical_package_id(cpu) cpu_to_node(cpu)
-#define topology_core_id(cpu) (cpu_logical_map(cpu) / NLM_THREADS_PER_CORE)
-#define topology_thread_cpumask(cpu) (&cpu_sibling_map[cpu])
-#define topology_core_cpumask(cpu) cpumask_of_node(cpu_to_node(cpu))
-#endif
-
#include <asm-generic/topology.h>
#endif /* _ASM_MACH_NETLOGIC_TOPOLOGY_H */
} \
} while(0)
+extern void set_pte_at(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
+ pte_t pteval);
+
#if defined(CONFIG_64BIT_PHYS_ADDR) && defined(CONFIG_CPU_MIPS32)
#define pte_none(pte) (!(((pte).pte_low | (pte).pte_high) & ~_PAGE_GLOBAL))
}
}
}
-#define set_pte_at(mm, addr, ptep, pteval) set_pte(ptep, pteval)
static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
{
}
#endif
}
-#define set_pte_at(mm, addr, ptep, pteval) set_pte(ptep, pteval)
static inline void pte_clear(struct mm_struct *mm, unsigned long addr, pte_t *ptep)
{
extern void __update_tlb(struct vm_area_struct *vma, unsigned long address,
pte_t pte);
-extern void __update_cache(struct vm_area_struct *vma, unsigned long address,
- pte_t pte);
static inline void update_mmu_cache(struct vm_area_struct *vma,
unsigned long address, pte_t *ptep)
{
pte_t pte = *ptep;
__update_tlb(vma, address, pte);
- __update_cache(vma, address, pte);
}
static inline void update_mmu_cache_pmd(struct vm_area_struct *vma,
{
int arch = EM_MIPS;
#ifdef CONFIG_64BIT
- if (!test_thread_flag(TIF_32BIT_REGS))
+ if (!test_thread_flag(TIF_32BIT_REGS)) {
arch |= __AUDIT_ARCH_64BIT;
- if (test_thread_flag(TIF_32BIT_ADDR))
- arch |= __AUDIT_ARCH_CONVENTION_MIPS64_N32;
+ /* N32 sets only TIF_32BIT_ADDR */
+ if (test_thread_flag(TIF_32BIT_ADDR))
+ arch |= __AUDIT_ARCH_CONVENTION_MIPS64_N32;
+ }
#endif
#if defined(__LITTLE_ENDIAN)
arch |= __AUDIT_ARCH_LE;
#include <asm/asm-offsets.h>
#include <asm/asmmacro.h>
#include <asm/cacheops.h>
+#include <asm/eva.h>
#include <asm/mipsregs.h>
#include <asm/mipsmtregs.h>
#include <asm/pm.h>
1: jal mips_cps_core_init
nop
+ /* Do any EVA initialization if necessary */
+ eva_init
+
/*
* Boot any other VPEs within this core that should be online, and
* deactivate this VPE if it should be offline.
if (mipspmu.irq >= 0) {
/* Request my own irq handler. */
err = request_irq(mipspmu.irq, mipsxx_pmu_handle_irq,
- IRQF_PERCPU | IRQF_NOBALANCING,
+ IRQF_PERCPU | IRQF_NOBALANCING | IRQF_NO_THREAD,
"mips_perf_pmu", NULL);
if (err) {
pr_warning("Unable to request IRQ%d for MIPS "
move s0, t2 # Save syscall pointer
move a0, sp
/*
- * syscall number is in v0 unless we called syscall(__NR_###)
+ * absolute syscall number is in v0 unless we called syscall(__NR_###)
* where the real syscall number is in a0
* note: NR_syscall is the first O32 syscall but the macro is
* only defined when compiling with -mabi=32 (CONFIG_32BIT)
* therefore __NR_O32_Linux is used (4000)
*/
- addiu a1, v0, __NR_O32_Linux
- bnez v0, 1f /* __NR_syscall at offset 0 */
- lw a1, PT_R4(sp)
+ .set push
+ .set reorder
+ subu t1, v0, __NR_O32_Linux
+ move a1, v0
+ bnez t1, 1f /* __NR_syscall at offset 0 */
+ lw a1, PT_R4(sp) /* Arg1 for __NR_syscall case */
+ .set pop
1: jal syscall_trace_enter
static int loongson_cu2_call(struct notifier_block *nfb, unsigned long action,
void *data)
{
- int fpu_enabled;
+ int fpu_owned;
int fr = !test_thread_flag(TIF_32BIT_FPREGS);
switch (action) {
case CU2_EXCEPTION:
preempt_disable();
- fpu_enabled = read_c0_status() & ST0_CU1;
+ fpu_owned = __is_fpu_owner();
if (!fr)
set_c0_status(ST0_CU1 | ST0_CU2);
else
KSTK_STATUS(current) |= ST0_FR;
else
KSTK_STATUS(current) &= ~ST0_FR;
- /* If FPU is enabled, we needn't init or restore fp */
- if(!fpu_enabled) {
+ /* If FPU is owned, we needn't init or restore fp */
+ if (!fpu_owned) {
set_thread_flag(TIF_USEDFPU);
if (!used_math()) {
_init_fpu();
#include <asm/page.h>
#include <asm/pgalloc.h>
#include <asm/sections.h>
-#include <linux/bootmem.h>
-#include <linux/init.h>
#include <linux/irq.h>
#include <asm/bootinfo.h>
#include <asm/mc146818-time.h>
EXPORT_SYMBOL(__flush_anon_page);
-void __update_cache(struct vm_area_struct *vma, unsigned long address,
- pte_t pte)
+static void mips_flush_dcache_from_pte(pte_t pteval, unsigned long address)
{
struct page *page;
- unsigned long pfn, addr;
- int exec = (vma->vm_flags & VM_EXEC) && !cpu_has_ic_fills_f_dc;
+ unsigned long pfn = pte_pfn(pteval);
- pfn = pte_pfn(pte);
if (unlikely(!pfn_valid(pfn)))
return;
+
page = pfn_to_page(pfn);
if (page_mapping(page) && Page_dcache_dirty(page)) {
- addr = (unsigned long) page_address(page);
- if (exec || pages_do_alias(addr, address & PAGE_MASK))
- flush_data_cache_page(addr);
+ unsigned long page_addr = (unsigned long) page_address(page);
+
+ if (!cpu_has_ic_fills_f_dc ||
+ pages_do_alias(page_addr, address & PAGE_MASK))
+ flush_data_cache_page(page_addr);
ClearPageDcacheDirty(page);
}
}
+void set_pte_at(struct mm_struct *mm, unsigned long addr,
+ pte_t *ptep, pte_t pteval)
+{
+ if (cpu_has_dc_aliases || !cpu_has_ic_fills_f_dc) {
+ if (pte_present(pteval))
+ mips_flush_dcache_from_pte(pteval, addr);
+ }
+
+ set_pte(ptep, pteval);
+}
+
unsigned long _page_cachable_default;
EXPORT_SYMBOL(_page_cachable_default);
/* otherwise look in the environment */
memsize_str = fw_getenv("memsize");
- if (memsize_str)
- tmp = kstrtol(memsize_str, 0, &memsize);
+ if (memsize_str) {
+ tmp = kstrtoul(memsize_str, 0, &memsize);
+ if (tmp)
+ pr_warn("Failed to read the 'memsize' env variable.\n");
+ }
if (eva) {
/* Look for ememsize for EVA */
ememsize_str = fw_getenv("ememsize");
- if (ememsize_str)
- tmp = kstrtol(ememsize_str, 0, &ememsize);
+ if (ememsize_str) {
+ tmp = kstrtoul(ememsize_str, 0, &ememsize);
+ if (tmp)
+ pr_warn("Failed to read the 'ememsize' env variable.\n");
+ }
}
if (!memsize && !ememsize) {
pr_warn("memsize not set in YAMON, set to default (32Mb)\n");
* the range 40-71.
*/
-asmlinkage void plat_irq_dispatch(struct pt_regs *regs)
+asmlinkage void plat_irq_dispatch(void)
{
u32 pending;
#define __NR_sched_setattr 345
#define __NR_sched_getattr 346
#define __NR_renameat2 347
-#define NR_syscalls 348
+#define __NR_seccomp 348
+#define __NR_getrandom 349
+#define __NR_memfd_create 350
+#define NR_syscalls 351
/*
* There are some system calls that are not present on 64 bit, some
COMPAT_SYSCALL_WRAP3(sched_setattr, pid_t, pid, struct sched_attr __user *, attr, unsigned int, flags);
COMPAT_SYSCALL_WRAP4(sched_getattr, pid_t, pid, struct sched_attr __user *, attr, unsigned int, size, unsigned int, flags);
COMPAT_SYSCALL_WRAP5(renameat2, int, olddfd, const char __user *, oldname, int, newdfd, const char __user *, newname, unsigned int, flags);
+COMPAT_SYSCALL_WRAP3(seccomp, unsigned int, op, unsigned int, flags, const char __user *, uargs);
+COMPAT_SYSCALL_WRAP3(getrandom, char __user *, buf, size_t, count, unsigned int, flags);
+COMPAT_SYSCALL_WRAP2(memfd_create, const char __user *, uname, unsigned int, flags);
S390_lowcore.program_new_psw.addr =
PSW_ADDR_AMODE | (unsigned long) s390_base_pgm_handler;
+ /*
+ * Clear subchannel ID and number to signal new kernel that no CCW or
+ * SCSI IPL has been done (for kexec and kdump)
+ */
+ S390_lowcore.subchannel_id = 0;
+ S390_lowcore.subchannel_nr = 0;
+
/* Store status at absolute zero */
store_status();
#include <linux/stddef.h>
#include <linux/unistd.h>
#include <linux/ptrace.h>
+#include <linux/random.h>
#include <linux/user.h>
#include <linux/tty.h>
#include <linux/ioport.h>
#include <asm/diag.h>
#include <asm/os_info.h>
#include <asm/sclp.h>
+#include <asm/sysinfo.h>
#include "entry.h"
/*
#endif
get_cpu_id(&cpu_id);
+ add_device_randomness(&cpu_id, sizeof(cpu_id));
switch (cpu_id.machine) {
case 0x9672:
#if !defined(CONFIG_64BIT)
}
/*
+ * Add system information as device randomness
+ */
+static void __init setup_randomness(void)
+{
+ struct sysinfo_3_2_2 *vmms;
+
+ vmms = (struct sysinfo_3_2_2 *) alloc_page(GFP_KERNEL);
+ if (vmms && stsi(vmms, 3, 2, 2) == 0 && vmms->count)
+		add_device_randomness(&vmms->vm, sizeof(vmms->vm[0]) * vmms->count);
+ free_page((unsigned long) vmms);
+}
+
+/*
* Setup function called from init/main.c just after the banner
* was printed.
*/
/* Setup zfcpdump support */
setup_zfcpdump();
+
+ /* Add system specific data to the random pool */
+ setup_randomness();
}
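
For reference, a minimal sketch of the add_device_randomness() contract the
hunks above rely on: the call mixes caller-supplied bytes into the input pool
without crediting any entropy, so per-machine identifiers are fair game. The
helper name and payload below are hypothetical:

#include <linux/random.h>

/* Hypothetical helper: feed a board serial number into the pool.
 * add_device_randomness() credits no entropy, so unpredictable-but-
 * not-secret data such as IDs and timestamps is acceptable input. */
static void seed_pool_from_serial(const char *serial, unsigned int len)
{
	add_device_randomness(serial, len);
}
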
#ifdef CONFIG_32BIT
SYSCALL(sys_sched_setattr,sys_sched_setattr,compat_sys_sched_setattr) /* 345 */
SYSCALL(sys_sched_getattr,sys_sched_getattr,compat_sys_sched_getattr)
SYSCALL(sys_renameat2,sys_renameat2,compat_sys_renameat2)
+SYSCALL(sys_seccomp,sys_seccomp,compat_sys_seccomp)
+SYSCALL(sys_getrandom,sys_getrandom,compat_sys_getrandom)
+SYSCALL(sys_memfd_create,sys_memfd_create,compat_sys_memfd_create) /* 350 */
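
Read together, the three s390 hunks above (syscall numbers, compat wrappers,
syscall table) form the complete recipe for wiring up a new syscall. A
condensed sketch with a hypothetical sys_example; the number 351 is purely
illustrative:

/* 1. asm/unistd.h: allocate the number and bump NR_syscalls. */
#define __NR_example		351
/* 2. compat_wrapper.c: generate the 31-bit wrapper that widens args. */
COMPAT_SYSCALL_WRAP1(example, unsigned int, flags);
/* 3. syscalls.S: native entry, 31-bit entry, compat wrapper. */
SYSCALL(sys_example,sys_example,compat_sys_example)
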
#
config CPU_SH2
bool
+ select SH_INTC
config CPU_SH2A
bool
bool
select CPU_HAS_INTEVT
select CPU_HAS_SR_RB
+ select SH_INTC
select SYS_SUPPORTS_SH_TMU
config CPU_SH4
select CPU_HAS_INTEVT
select CPU_HAS_SR_RB
select CPU_HAS_FPU if !CPU_SH4AL_DSP
+ select SH_INTC
select SYS_SUPPORTS_SH_TMU
select SYS_SUPPORTS_HUGETLBFS if MMU
sysenter_badsys:
movl $-ENOSYS,%eax
jmp sysenter_after_call
-END(syscall_badsys)
+END(sysenter_badsys)
CFI_ENDPROC
.macro FIXUP_ESPFIX_STACK
if (cpumask_test_cpu(cpu, mm_cpumask(active_mm))) {
cpumask_clear_cpu(cpu, mm_cpumask(active_mm));
load_cr3(swapper_pg_dir);
- trace_tlb_flush(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
+ /*
+ * This gets called in the idle path where RCU
+ * functions differently. Tracing normally
+ * uses RCU, so we have to call the tracepoint
+ * specially here.
+ */
+ trace_tlb_flush_rcuidle(TLB_FLUSH_ON_TASK_SWITCH, TLB_FLUSH_ALL);
}
}
EXPORT_SYMBOL_GPL(leave_mm);
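
A note on the _rcuidle suffix used above, sketched with a hypothetical
tracepoint:

#include <linux/tracepoint.h>

/* Given TRACE_EVENT(my_event, ...), the kernel generates both
 *
 *   trace_my_event(...)          - normal contexts; relies on RCU
 *   trace_my_event_rcuidle(...)  - for code running after the CPU has
 *                                  told RCU it is idle; re-enters RCU
 *                                  around the probe callback
 *
 * leave_mm() above can run from the cpuidle path, hence _rcuidle.
 */
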
*
* This is in units of pages.
*/
-unsigned long tlb_single_page_flush_ceiling = 33;
+static unsigned long tlb_single_page_flush_ceiling __read_mostly = 33;
void flush_tlb_mm_range(struct mm_struct *mm, unsigned long start,
unsigned long end, unsigned long vmflag)
#include <linux/module.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
-#include <linux/tegra-powergate.h>
#include <linux/regulator/consumer.h>
+#include <soc/tegra/pmc.h>
#include "ahci.h"
#define SATA_CONFIGURATION_0 0x180
};
static const struct ata_port_info xgene_ahci_port_info = {
- .flags = AHCI_FLAG_COMMON | ATA_FLAG_NCQ,
+ .flags = AHCI_FLAG_COMMON,
.pio_mask = ATA_PIO4,
.udma_mask = ATA_UDMA6,
.port_ops = &xgene_ahci_ops,
/* Configure the host controller */
xgene_ahci_hw_init(hpriv);
- hpriv->flags = AHCI_HFLAG_NO_PMP | AHCI_HFLAG_YES_NCQ;
+ hpriv->flags = AHCI_HFLAG_NO_PMP | AHCI_HFLAG_NO_NCQ;
rc = ahci_platform_init_host(pdev, hpriv, &xgene_ahci_port_info);
if (rc)
{ "Micron_M500*", NULL, ATA_HORKAGE_NO_NCQ_TRIM, },
{ "Crucial_CT???M500SSD*", NULL, ATA_HORKAGE_NO_NCQ_TRIM, },
{ "Micron_M550*", NULL, ATA_HORKAGE_NO_NCQ_TRIM, },
- { "Crucial_CT???M550SSD*", NULL, ATA_HORKAGE_NO_NCQ_TRIM, },
+ { "Crucial_CT*M550SSD*", NULL, ATA_HORKAGE_NO_NCQ_TRIM, },
/*
* Some WD SATA-I drives spin up and down erratically when the link
/*
* pata_s3c_bus_softreset - PATA device software reset
*/
-static unsigned int pata_s3c_bus_softreset(struct ata_port *ap,
+static int pata_s3c_bus_softreset(struct ata_port *ap,
unsigned long deadline)
{
struct ata_ioports *ioaddr = &ap->ioaddr;
* Note: Original code is ata_bus_softreset().
*/
-static unsigned int scc_bus_softreset(struct ata_port *ap, unsigned int devmask,
+static int scc_bus_softreset(struct ata_port *ap, unsigned int devmask,
unsigned long deadline)
{
struct ata_ioports *ioaddr = &ap->ioaddr;
udelay(20);
out_be32(ioaddr->ctl_addr, ap->ctl);
- scc_wait_after_reset(&ap->link, devmask, deadline);
-
- return 0;
+ return scc_wait_after_reset(&ap->link, devmask, deadline);
}
/**
{
struct ata_port *ap = link->ap;
unsigned int slave_possible = ap->flags & ATA_FLAG_SLAVE_POSS;
- unsigned int devmask = 0, err_mask;
+ unsigned int devmask = 0;
+ int rc;
u8 err;
DPRINTK("ENTER\n");
/* issue bus reset */
DPRINTK("about to softreset, devmask=%x\n", devmask);
- err_mask = scc_bus_softreset(ap, devmask, deadline);
- if (err_mask) {
- ata_port_err(ap, "SRST failed (err_mask=0x%x)\n", err_mask);
+ rc = scc_bus_softreset(ap, devmask, deadline);
+ if (rc) {
+ ata_port_err(ap, "SRST failed (err_mask=0x%x)\n", rc);
return -EIO;
}
}
if (e->num_vcs && vc >= e->num_vcs) {
dev_warn(ccn->dev, "Invalid vc %d for node/XP %d!\n",
- port, node_xp);
+ vc, node_xp);
return -EINVAL;
}
valid = 1;
*/
static void efivar_entry_list_del_unlock(struct efivar_entry *entry)
{
- WARN_ON(!spin_is_locked(&__efivars->lock));
+ lockdep_assert_held(&__efivars->lock);
list_del(&entry->list);
spin_unlock_irq(&__efivars->lock);
const struct efivar_operations *ops = __efivars->ops;
efi_status_t status;
- WARN_ON(!spin_is_locked(&__efivars->lock));
+ lockdep_assert_held(&__efivars->lock);
status = ops->set_variable(entry->var.VariableName,
&entry->var.VendorGuid,
int strsize1, strsize2;
bool found = false;
- WARN_ON(!spin_is_locked(&__efivars->lock));
+ lockdep_assert_held(&__efivars->lock);
list_for_each_entry_safe(entry, n, head, list) {
strsize1 = ucs2_strsize(name, 1024);
const struct efivar_operations *ops = __efivars->ops;
efi_status_t status;
- WARN_ON(!spin_is_locked(&__efivars->lock));
+ lockdep_assert_held(&__efivars->lock);
status = ops->get_variable(entry->var.VariableName,
&entry->var.VendorGuid,
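
The assertion swap in the efivars hunks above is worth spelling out:
spin_is_locked() only reports that *someone* holds the lock, and on
!CONFIG_SMP builds it is constant-false, so WARN_ON(!spin_is_locked(...))
misfires on UP. lockdep_assert_held() checks that the current context holds
the lock and compiles away entirely when CONFIG_LOCKDEP is off. A minimal
sketch (my_lock/my_state are hypothetical):

#include <linux/spinlock.h>
#include <linux/lockdep.h>

static DEFINE_SPINLOCK(my_lock);
static int my_state;

/* Caller must hold my_lock; only enforced when lockdep is enabled. */
static void my_update_state(int v)
{
	lockdep_assert_held(&my_lock);
	my_state = v;
}
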
struct gpio_desc **dr;
struct gpio_desc *desc;
- dr = devres_alloc(devm_gpiod_release, sizeof(struct gpiod_desc *),
+ dr = devres_alloc(devm_gpiod_release, sizeof(struct gpio_desc *),
GFP_KERNEL);
if (!dr)
return ERR_PTR(-ENOMEM);
return 0;
}
+static int lp_gpio_resume(struct device *dev)
+{
+ struct platform_device *pdev = to_platform_device(dev);
+ struct lp_gpio *lg = platform_get_drvdata(pdev);
+ unsigned long reg;
+ int i;
+
+ /* on some hardware suspend clears input sensing, re-enable it here */
+ for (i = 0; i < lg->chip.ngpio; i++) {
+ if (gpiochip_is_requested(&lg->chip, i) != NULL) {
+ reg = lp_gpio_reg(&lg->chip, i, LP_CONFIG2);
+ outl(inl(reg) & ~GPINDIS_BIT, reg);
+ }
+ }
+ return 0;
+}
+
static const struct dev_pm_ops lp_gpio_pm_ops = {
.runtime_suspend = lp_gpio_runtime_suspend,
.runtime_resume = lp_gpio_runtime_resume,
+ .resume = lp_gpio_resume,
};
static const struct acpi_device_id lynxpoint_gpio_acpi_match[] = {
struct clk *clk;
};
+static struct irq_chip zynq_gpio_level_irqchip;
+static struct irq_chip zynq_gpio_edge_irqchip;
+
/**
* zynq_gpio_get_bank_pin - Get the bank number and pin number within that bank
* for a given pin in the GPIO device
gpio->base_addr + ZYNQ_GPIO_INTPOL_OFFSET(bank_num));
writel_relaxed(int_any,
gpio->base_addr + ZYNQ_GPIO_INTANY_OFFSET(bank_num));
+
+ if (type & IRQ_TYPE_LEVEL_MASK) {
+ __irq_set_chip_handler_name_locked(irq_data->irq,
+ &zynq_gpio_level_irqchip, handle_fasteoi_irq, NULL);
+ } else {
+ __irq_set_chip_handler_name_locked(irq_data->irq,
+ &zynq_gpio_edge_irqchip, handle_level_irq, NULL);
+ }
+
return 0;
}
}
/* irq chip descriptor */
-static struct irq_chip zynq_gpio_irqchip = {
+static struct irq_chip zynq_gpio_level_irqchip = {
.name = DRIVER_NAME,
.irq_enable = zynq_gpio_irq_enable,
+ .irq_eoi = zynq_gpio_irq_ack,
+ .irq_mask = zynq_gpio_irq_mask,
+ .irq_unmask = zynq_gpio_irq_unmask,
+ .irq_set_type = zynq_gpio_set_irq_type,
+ .irq_set_wake = zynq_gpio_set_wake,
+ .flags = IRQCHIP_EOI_THREADED | IRQCHIP_EOI_IF_HANDLED,
+};
+
+static struct irq_chip zynq_gpio_edge_irqchip = {
+ .name = DRIVER_NAME,
+ .irq_enable = zynq_gpio_irq_enable,
+ .irq_ack = zynq_gpio_irq_ack,
.irq_mask = zynq_gpio_irq_mask,
.irq_unmask = zynq_gpio_irq_unmask,
.irq_set_type = zynq_gpio_set_irq_type,
offset);
generic_handle_irq(gpio_irq);
}
-
- /* clear IRQ in HW */
- writel_relaxed(int_sts, gpio->base_addr +
- ZYNQ_GPIO_INTSTS_OFFSET(bank_num));
}
}
writel_relaxed(ZYNQ_GPIO_IXR_DISABLE_ALL, gpio->base_addr +
ZYNQ_GPIO_INTDIS_OFFSET(bank_num));
- ret = gpiochip_irqchip_add(chip, &zynq_gpio_irqchip, 0,
- handle_simple_irq, IRQ_TYPE_NONE);
+ ret = gpiochip_irqchip_add(chip, &zynq_gpio_edge_irqchip, 0,
+ handle_level_irq, IRQ_TYPE_NONE);
if (ret) {
dev_err(&pdev->dev, "Failed to add irq chip\n");
goto err_rm_gpiochip;
}
- gpiochip_set_chained_irqchip(chip, &zynq_gpio_irqchip, irq,
+ gpiochip_set_chained_irqchip(chip, &zynq_gpio_edge_irqchip, irq,
zynq_gpio_irqhandler);
pm_runtime_set_active(&pdev->dev);
void of_gpiochip_remove(struct gpio_chip *chip)
{
gpiochip_remove_pin_ranges(chip);
-
- if (chip->of_node)
- of_node_put(chip->of_node);
+ of_node_put(chip->of_node);
}
return true;
}
+void intel_hpd_cancel_work(struct drm_i915_private *dev_priv)
+{
+ spin_lock_irq(&dev_priv->irq_lock);
+
+ dev_priv->long_hpd_port_mask = 0;
+ dev_priv->short_hpd_port_mask = 0;
+ dev_priv->hpd_event_bits = 0;
+
+ spin_unlock_irq(&dev_priv->irq_lock);
+
+ cancel_work_sync(&dev_priv->dig_port_work);
+ cancel_work_sync(&dev_priv->hotplug_work);
+ cancel_delayed_work_sync(&dev_priv->hotplug_reenable_work);
+}
+
+static void intel_suspend_encoders(struct drm_i915_private *dev_priv)
+{
+ struct drm_device *dev = dev_priv->dev;
+ struct drm_encoder *encoder;
+
+ drm_modeset_lock_all(dev);
+ list_for_each_entry(encoder, &dev->mode_config.encoder_list, head) {
+ struct intel_encoder *intel_encoder = to_intel_encoder(encoder);
+
+ if (intel_encoder->suspend)
+ intel_encoder->suspend(intel_encoder);
+ }
+ drm_modeset_unlock_all(dev);
+}
+
static int i915_drm_freeze(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
flush_delayed_work(&dev_priv->rps.delayed_resume_work);
intel_runtime_pm_disable_interrupts(dev);
+ intel_hpd_cancel_work(dev_priv);
+
+ intel_suspend_encoders(dev_priv);
intel_suspend_gt_powersave(dev);
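
The intel_hpd_cancel_work() helper added above follows a common teardown
shape: clear the shared state while holding the IRQ-safe lock, drop the lock,
then cancel the workers synchronously, because cancel_work_sync() can sleep
and the workers take the same lock. A generic sketch with hypothetical names:

#include <linux/spinlock.h>
#include <linux/workqueue.h>

struct hypot_dev {
	spinlock_t lock;
	unsigned long pending_mask;
	struct work_struct worker;
};

static void hypot_cancel_work(struct hypot_dev *hd)
{
	spin_lock_irq(&hd->lock);
	hd->pending_mask = 0;		/* a running worker now sees no work */
	spin_unlock_irq(&hd->lock);

	/* May sleep and waits for the worker; hd->lock must be dropped. */
	cancel_work_sync(&hd->worker);
}
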
} hpd_mark;
} hpd_stats[HPD_NUM_PINS];
u32 hpd_event_bits;
- struct timer_list hotplug_reenable_timer;
+ struct delayed_work hotplug_reenable_work;
struct i915_fbc fbc;
struct i915_drrs drrs;
extern unsigned long i915_gfx_val(struct drm_i915_private *dev_priv);
extern void i915_update_gfx_val(struct drm_i915_private *dev_priv);
int vlv_force_gfx_clock(struct drm_i915_private *dev_priv, bool on);
+void intel_hpd_cancel_work(struct drm_i915_private *dev_priv);
extern void intel_console_resume(struct work_struct *work);
* some connectors */
if (hpd_disabled) {
drm_kms_helper_poll_enable(dev);
- mod_timer(&dev_priv->hotplug_reenable_timer,
- jiffies + msecs_to_jiffies(I915_REENABLE_HOTPLUG_DELAY));
+ mod_delayed_work(system_wq, &dev_priv->hotplug_reenable_work,
+ msecs_to_jiffies(I915_REENABLE_HOTPLUG_DELAY));
}
spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags);
drm_kms_helper_hotplug_event(dev);
}
-static void intel_hpd_irq_uninstall(struct drm_i915_private *dev_priv)
-{
- del_timer_sync(&dev_priv->hotplug_reenable_timer);
-}
-
static void ironlake_rps_change_irq_handler(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
if (!dev_priv)
return;
- intel_hpd_irq_uninstall(dev_priv);
-
gen8_irq_reset(dev);
}
I915_WRITE(VLV_MASTER_IER, 0);
- intel_hpd_irq_uninstall(dev_priv);
-
for_each_pipe(pipe)
I915_WRITE(PIPESTAT(pipe), 0xffff);
if (!dev_priv)
return;
- intel_hpd_irq_uninstall(dev_priv);
-
ironlake_irq_reset(dev);
}
struct drm_i915_private *dev_priv = dev->dev_private;
int pipe;
- intel_hpd_irq_uninstall(dev_priv);
-
if (I915_HAS_HOTPLUG(dev)) {
I915_WRITE(PORT_HOTPLUG_EN, 0);
I915_WRITE(PORT_HOTPLUG_STAT, I915_READ(PORT_HOTPLUG_STAT));
if (!dev_priv)
return;
- intel_hpd_irq_uninstall(dev_priv);
-
I915_WRITE(PORT_HOTPLUG_EN, 0);
I915_WRITE(PORT_HOTPLUG_STAT, I915_READ(PORT_HOTPLUG_STAT));
I915_WRITE(IIR, I915_READ(IIR));
}
-static void intel_hpd_irq_reenable(unsigned long data)
+static void intel_hpd_irq_reenable(struct work_struct *work)
{
- struct drm_i915_private *dev_priv = (struct drm_i915_private *)data;
+ struct drm_i915_private *dev_priv =
+ container_of(work, typeof(*dev_priv),
+ hotplug_reenable_work.work);
struct drm_device *dev = dev_priv->dev;
struct drm_mode_config *mode_config = &dev->mode_config;
unsigned long irqflags;
int i;
+ intel_runtime_pm_get(dev_priv);
+
spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
for (i = (HPD_NONE + 1); i < HPD_NUM_PINS; i++) {
struct drm_connector *connector;
if (dev_priv->display.hpd_irq_setup)
dev_priv->display.hpd_irq_setup(dev);
spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags);
+
+ intel_runtime_pm_put(dev_priv);
}
void intel_irq_init(struct drm_device *dev)
setup_timer(&dev_priv->gpu_error.hangcheck_timer,
i915_hangcheck_elapsed,
(unsigned long) dev);
- setup_timer(&dev_priv->hotplug_reenable_timer, intel_hpd_irq_reenable,
- (unsigned long) dev_priv);
+ INIT_DELAYED_WORK(&dev_priv->hotplug_reenable_work,
+ intel_hpd_irq_reenable);
pm_qos_add_request(&dev_priv->pm_qos, PM_QOS_CPU_DMA_LATENCY, PM_QOS_DEFAULT_VALUE);
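
The timer-to-delayed-work switch above is the standard fix when a callback
starts needing sleeping calls (here intel_runtime_pm_get()): timer callbacks
run in atomic context, delayed work runs in process context. A condensed
sketch of the converted shape, all names hypothetical:

#include <linux/kernel.h>
#include <linux/workqueue.h>
#include <linux/jiffies.h>

struct hypot_priv {
	struct delayed_work reenable_work;
};

static void reenable_work_fn(struct work_struct *work)
{
	struct hypot_priv *priv =
		container_of(work, struct hypot_priv, reenable_work.work);

	/* Process context: sleeping calls (runtime PM, mutexes) are safe
	 * here, which a timer callback in softirq context could not make. */
	(void)priv;
}

static void hypot_schedule_reenable(struct hypot_priv *priv, unsigned int ms)
{
	INIT_DELAYED_WORK(&priv->reenable_work, reenable_work_fn);
	mod_delayed_work(system_wq, &priv->reenable_work,
			 msecs_to_jiffies(ms));
}
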
goto out;
}
+ drm_modeset_acquire_init(&ctx, 0);
+
/* for pre-945g platforms use load detect */
if (intel_get_load_detect_pipe(connector, NULL, &tmp, &ctx)) {
if (intel_crt_detect_ddc(connector))
status = connector_status_connected;
else
status = intel_crt_load_detect(crt);
- intel_release_load_detect_pipe(connector, &tmp, &ctx);
+ intel_release_load_detect_pipe(connector, &tmp);
} else
status = connector_status_unknown;
+ drm_modeset_drop_locks(&ctx);
+ drm_modeset_acquire_fini(&ctx);
+
out:
intel_display_power_put(dev_priv, power_domain);
return status;
connector->base.id, connector->name,
encoder->base.id, encoder->name);
- drm_modeset_acquire_init(ctx, 0);
-
retry:
ret = drm_modeset_lock(&config->connection_mutex, ctx);
if (ret)
i++;
if (!(encoder->possible_crtcs & (1 << i)))
continue;
- if (!possible_crtc->enabled) {
- crtc = possible_crtc;
- break;
- }
+ if (possible_crtc->enabled)
+ continue;
+ /* This can occur when applying the pipe A quirk on resume. */
+ if (to_intel_crtc(possible_crtc)->new_enabled)
+ continue;
+
+ crtc = possible_crtc;
+ break;
}
/*
goto retry;
}
- drm_modeset_drop_locks(ctx);
- drm_modeset_acquire_fini(ctx);
-
return false;
}
void intel_release_load_detect_pipe(struct drm_connector *connector,
- struct intel_load_detect_pipe *old,
- struct drm_modeset_acquire_ctx *ctx)
+ struct intel_load_detect_pipe *old)
{
struct intel_encoder *intel_encoder =
intel_attached_encoder(connector);
drm_framebuffer_unreference(old->release_fb);
}
- goto unlock;
return;
}
/* Switch crtc and encoder back off if necessary */
if (old->dpms_mode != DRM_MODE_DPMS_ON)
connector->funcs->dpms(connector, old->dpms_mode);
-
-unlock:
- drm_modeset_drop_locks(ctx);
- drm_modeset_acquire_fini(ctx);
}
static int i9xx_pll_refclk(struct drm_device *dev,
};
const struct drm_rect clip = {
/* integer pixels */
- .x2 = intel_crtc->config.pipe_src_w,
- .y2 = intel_crtc->config.pipe_src_h,
+ .x2 = intel_crtc->active ? intel_crtc->config.pipe_src_w : 0,
+ .y2 = intel_crtc->active ? intel_crtc->config.pipe_src_h : 0,
};
bool visible;
int ret;
struct intel_connector *connector;
struct drm_connector *crt = NULL;
struct intel_load_detect_pipe load_detect_temp;
- struct drm_modeset_acquire_ctx ctx;
+ struct drm_modeset_acquire_ctx *ctx = dev->mode_config.acquire_ctx;
/* We can't just switch on the pipe A, we need to set things up with a
* proper mode and output configuration. As a gross hack, enable pipe A
if (!crt)
return;
- if (intel_get_load_detect_pipe(crt, NULL, &load_detect_temp, &ctx))
- intel_release_load_detect_pipe(crt, &load_detect_temp, &ctx);
-
-
+ if (intel_get_load_detect_pipe(crt, NULL, &load_detect_temp, ctx))
+ intel_release_load_detect_pipe(crt, &load_detect_temp);
}
static bool
* experience fancy races otherwise.
*/
drm_irq_uninstall(dev);
- cancel_work_sync(&dev_priv->hotplug_work);
+ intel_hpd_cancel_work(dev_priv);
dev_priv->pm._irqs_disabled = true;
/*
if (WARN_ON(!intel_encoder->base.crtc))
return;
+ if (!to_intel_crtc(intel_encoder->base.crtc)->active)
+ return;
+
/* Try to read receiver status if the link appears to be up */
if (!intel_dp_get_link_status(intel_dp, link_status)) {
return;
kfree(intel_dig_port);
}
+static void intel_dp_encoder_suspend(struct intel_encoder *intel_encoder)
+{
+ struct intel_dp *intel_dp = enc_to_intel_dp(&intel_encoder->base);
+
+ if (!is_edp(intel_dp))
+ return;
+
+ edp_panel_vdd_off_sync(intel_dp);
+}
+
static void intel_dp_encoder_reset(struct drm_encoder *encoder)
{
intel_edp_panel_vdd_sanitize(to_intel_encoder(encoder));
intel_dp_hpd_pulse(struct intel_digital_port *intel_dig_port, bool long_hpd)
{
struct intel_dp *intel_dp = &intel_dig_port->dp;
+ struct intel_encoder *intel_encoder = &intel_dig_port->base;
struct drm_device *dev = intel_dig_port->base.base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
- int ret;
+ enum intel_display_power_domain power_domain;
+ bool ret = true;
+
if (intel_dig_port->base.type != INTEL_OUTPUT_EDP)
intel_dig_port->base.type = INTEL_OUTPUT_DISPLAYPORT;
DRM_DEBUG_KMS("got hpd irq on port %d - %s\n", intel_dig_port->port,
long_hpd ? "long" : "short");
+ power_domain = intel_display_port_power_domain(intel_encoder);
+ intel_display_power_get(dev_priv, power_domain);
+
if (long_hpd) {
if (!ibx_digital_port_connected(dev_priv, intel_dig_port))
goto mst_fail;
} else {
if (intel_dp->is_mst) {
- ret = intel_dp_check_mst_status(intel_dp);
- if (ret == -EINVAL)
+ if (intel_dp_check_mst_status(intel_dp) == -EINVAL)
goto mst_fail;
}
drm_modeset_unlock(&dev->mode_config.connection_mutex);
}
}
- return false;
+ ret = false;
+ goto put_power;
mst_fail:
/* if we were in MST mode, and device is not there get out of MST mode */
if (intel_dp->is_mst) {
intel_dp->is_mst = false;
drm_dp_mst_topology_mgr_set_mst(&intel_dp->mst_mgr, intel_dp->is_mst);
}
- return true;
+put_power:
+ intel_display_power_put(dev_priv, power_domain);
+
+ return ret;
}
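
The hpd_pulse rework above also illustrates the display-power reference
pattern: take the power reference before touching hardware and funnel every
return through a single put label so references stay balanced. Reduced to its
shape, with hypothetical names and stub helpers:

#include <linux/types.h>

struct hypot_port { bool connected; };

static void hypot_power_get(struct hypot_port *p) { (void)p; }
static void hypot_power_put(struct hypot_port *p) { (void)p; }

static bool hypot_handle_pulse(struct hypot_port *port)
{
	bool ret = true;	/* true => caller falls back to full teardown */

	hypot_power_get(port);

	if (!port->connected)
		goto put_power;		/* ret stays true */

	/* ... service the hotplug pulse here ... */
	ret = false;

put_power:
	hypot_power_put(port);
	return ret;
}
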
/* Return which DP Port should be selected for Transcoder DP control */
intel_encoder->disable = intel_disable_dp;
intel_encoder->get_hw_state = intel_dp_get_hw_state;
intel_encoder->get_config = intel_dp_get_config;
+ intel_encoder->suspend = intel_dp_encoder_suspend;
if (IS_CHERRYVIEW(dev)) {
intel_encoder->pre_pll_enable = chv_dp_pre_pll_enable;
intel_encoder->pre_enable = chv_pre_enable_dp;
* be set correctly before calling this function. */
void (*get_config)(struct intel_encoder *,
struct intel_crtc_config *pipe_config);
+ /*
+ * Called during system suspend after all pending requests for the
+ * encoder are flushed (for example for DP AUX transactions) and
+ * device interrupts are disabled.
+ */
+ void (*suspend)(struct intel_encoder *);
int crtc_mask;
enum hpd_pin hpd_pin;
};
struct intel_load_detect_pipe *old,
struct drm_modeset_acquire_ctx *ctx);
void intel_release_load_detect_pipe(struct drm_connector *connector,
- struct intel_load_detect_pipe *old,
- struct drm_modeset_acquire_ctx *ctx);
+ struct intel_load_detect_pipe *old);
int intel_pin_and_fence_fb_obj(struct drm_device *dev,
struct drm_i915_gem_object *obj,
struct intel_engine_cs *pipelined);
struct intel_load_detect_pipe tmp;
struct drm_modeset_acquire_ctx ctx;
+ drm_modeset_acquire_init(&ctx, 0);
+
if (intel_get_load_detect_pipe(connector, &mode, &tmp, &ctx)) {
type = intel_tv_detect_type(intel_tv, connector);
- intel_release_load_detect_pipe(connector, &tmp, &ctx);
+ intel_release_load_detect_pipe(connector, &tmp);
} else
return connector_status_unknown;
+
+ drm_modeset_drop_locks(&ctx);
+ drm_modeset_acquire_fini(&ctx);
} else
return connector->status;
evergreen.o evergreen_cs.o evergreen_blit_shaders.o \
evergreen_hdmi.o radeon_trace_points.o ni.o cayman_blit_shaders.o \
atombios_encoders.o radeon_semaphore.o radeon_sa.o atombios_i2c.o si.o \
- si_blit_shaders.o radeon_prime.o radeon_uvd.o cik.o cik_blit_shaders.o \
+ si_blit_shaders.o radeon_prime.o cik.o cik_blit_shaders.o \
r600_dpm.o rs780_dpm.o rv6xx_dpm.o rv770_dpm.o rv730_dpm.o rv740_dpm.o \
rv770_smc.o cypress_dpm.o btc_dpm.o sumo_dpm.o sumo_smc.o trinity_dpm.o \
trinity_smc.o ni_dpm.o si_smc.o si_dpm.o kv_smc.o kv_dpm.o ci_smc.o \
WREG32_SMC(CG_THERMAL_CTRL, tmp);
#endif
+ rdev->pm.dpm.thermal.min_temp = low_temp;
+ rdev->pm.dpm.thermal.max_temp = high_temp;
+
return 0;
}
u32 mc_shared_chmap, mc_arb_ramcfg;
u32 hdp_host_path_cntl;
u32 tmp;
- int i, j, k;
+ int i, j;
switch (rdev->family) {
case CHIP_BONAIRE:
(rdev->pdev->device == 0x130B) ||
(rdev->pdev->device == 0x130E) ||
(rdev->pdev->device == 0x1315) ||
+ (rdev->pdev->device == 0x1318) ||
(rdev->pdev->device == 0x131B)) {
rdev->config.cik.max_cu_per_sh = 4;
rdev->config.cik.max_backends_per_se = 1;
rdev->config.cik.max_sh_per_se,
rdev->config.cik.max_backends_per_se);
+ rdev->config.cik.active_cus = 0;
for (i = 0; i < rdev->config.cik.max_shader_engines; i++) {
for (j = 0; j < rdev->config.cik.max_sh_per_se; j++) {
- for (k = 0; k < rdev->config.cik.max_cu_per_sh; k++) {
- rdev->config.cik.active_cus +=
- hweight32(cik_get_cu_active_bitmap(rdev, i, j));
- }
+ rdev->config.cik.active_cus +=
+ hweight32(cik_get_cu_active_bitmap(rdev, i, j));
}
}
radeon_ring_write(ring, PACKET3(PACKET3_SET_UCONFIG_REG, 1));
radeon_ring_write(ring, ((scratch - PACKET3_SET_UCONFIG_REG_START) >> 2));
radeon_ring_write(ring, 0xDEADBEEF);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
for (i = 0; i < rdev->usec_timeout; i++) {
tmp = RREG32(scratch);
radeon_ring_write(ring, 0);
}
+/**
+ * cik_semaphore_ring_emit - emit a semaphore on the CP ring
+ *
+ * @rdev: radeon_device pointer
+ * @ring: radeon ring buffer object
+ * @semaphore: radeon semaphore object
+ * @emit_wait: Is this a semaphore wait?
+ *
+ * Emits a semaphore signal/wait packet to the CP ring and prevents the PFP
+ * from running ahead of semaphore waits.
+ */
bool cik_semaphore_ring_emit(struct radeon_device *rdev,
struct radeon_ring *ring,
struct radeon_semaphore *semaphore,
radeon_ring_write(ring, lower_32_bits(addr));
radeon_ring_write(ring, (upper_32_bits(addr) & 0xffff) | sel);
+ if (emit_wait && ring->idx == RADEON_RING_TYPE_GFX_INDEX) {
+ /* Prevent the PFP from running ahead of the semaphore wait */
+ radeon_ring_write(ring, PACKET3(PACKET3_PFP_SYNC_ME, 0));
+ radeon_ring_write(ring, 0x0);
+ }
+
return true;
}
return r;
}
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
radeon_semaphore_free(rdev, &sem, *fence);
return r;
ib.ptr[1] = ((scratch - PACKET3_SET_UCONFIG_REG_START) >> 2);
ib.ptr[2] = 0xDEADBEEF;
ib.length_dw = 3;
- r = radeon_ib_schedule(rdev, &ib, NULL);
+ r = radeon_ib_schedule(rdev, &ib, NULL, false);
if (r) {
radeon_scratch_free(rdev, scratch);
radeon_ib_free(rdev, &ib);
radeon_ring_write(ring, 0x0000000e); /* VGT_VERTEX_REUSE_BLOCK_CNTL */
radeon_ring_write(ring, 0x00000010); /* VGT_OUT_DEALLOC_CNTL */
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
return 0;
}
/* update SH_MEM_* regs */
radeon_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 3));
- radeon_ring_write(ring, (WRITE_DATA_ENGINE_SEL(0) |
+ radeon_ring_write(ring, (WRITE_DATA_ENGINE_SEL(usepfp) |
WRITE_DATA_DST_SEL(0)));
radeon_ring_write(ring, SRBM_GFX_CNTL >> 2);
radeon_ring_write(ring, 0);
radeon_ring_write(ring, VMID(vm->id));
radeon_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 6));
- radeon_ring_write(ring, (WRITE_DATA_ENGINE_SEL(0) |
+ radeon_ring_write(ring, (WRITE_DATA_ENGINE_SEL(usepfp) |
WRITE_DATA_DST_SEL(0)));
radeon_ring_write(ring, SH_MEM_BASES >> 2);
radeon_ring_write(ring, 0);
radeon_ring_write(ring, 0); /* SH_MEM_APE1_LIMIT */
radeon_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 3));
- radeon_ring_write(ring, (WRITE_DATA_ENGINE_SEL(0) |
+ radeon_ring_write(ring, (WRITE_DATA_ENGINE_SEL(usepfp) |
WRITE_DATA_DST_SEL(0)));
radeon_ring_write(ring, SRBM_GFX_CNTL >> 2);
radeon_ring_write(ring, 0);
/* bits 0-15 are the VM contexts0-15 */
radeon_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 3));
- radeon_ring_write(ring, (WRITE_DATA_ENGINE_SEL(0) |
+ radeon_ring_write(ring, (WRITE_DATA_ENGINE_SEL(usepfp) |
WRITE_DATA_DST_SEL(0)));
radeon_ring_write(ring, VM_INVALIDATE_REQUEST >> 2);
radeon_ring_write(ring, 0);
return r;
}
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
radeon_semaphore_free(rdev, &sem, *fence);
return r;
radeon_ring_write(ring, upper_32_bits(rdev->vram_scratch.gpu_addr));
radeon_ring_write(ring, 1); /* number of DWs to follow */
radeon_ring_write(ring, 0xDEADBEEF);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
for (i = 0; i < rdev->usec_timeout; i++) {
tmp = readl(ptr);
ib.ptr[4] = 0xDEADBEEF;
ib.length_dw = 5;
- r = radeon_ib_schedule(rdev, &ib, NULL);
+ r = radeon_ib_schedule(rdev, &ib, NULL, false);
if (r) {
radeon_ib_free(rdev, &ib);
DRM_ERROR("radeon: failed to schedule ib (%d).\n", r);
radeon_ring_write(ring, PACKET3_ME_INITIALIZE_DEVICE_ID(1));
radeon_ring_write(ring, 0);
radeon_ring_write(ring, 0);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
cp_me = 0xff;
WREG32(CP_ME_CNTL, cp_me);
radeon_ring_write(ring, 0x0000000e); /* VGT_VERTEX_REUSE_BLOCK_CNTL */
radeon_ring_write(ring, 0x00000010); /* */
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
return 0;
}
return r;
}
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
radeon_semaphore_free(rdev, &sem, *fence);
return r;
return kv_enable_uvd_dpm(rdev, !gate);
}
-static u8 kv_get_vce_boot_level(struct radeon_device *rdev)
+static u8 kv_get_vce_boot_level(struct radeon_device *rdev, u32 evclk)
{
u8 i;
struct radeon_vce_clock_voltage_dependency_table *table =
&rdev->pm.dpm.dyn_state.vce_clock_voltage_dependency_table;
for (i = 0; i < table->count; i++) {
- if (table->entries[i].evclk >= 0) /* XXX */
+ if (table->entries[i].evclk >= evclk)
break;
}
if (pi->caps_stable_p_state)
pi->vce_boot_level = table->count - 1;
else
- pi->vce_boot_level = kv_get_vce_boot_level(rdev);
+ pi->vce_boot_level = kv_get_vce_boot_level(rdev, radeon_new_state->evclk);
ret = kv_copy_bytes_to_smc(rdev,
pi->dpm_table_start +
pi->caps_sclk_ds = true;
pi->enable_auto_thermal_throttling = true;
pi->disable_nb_ps3_in_battery = false;
- pi->bapm_enable = true;
+ if (radeon_bapm == 0)
+ pi->bapm_enable = false;
+ else
+ pi->bapm_enable = true;
pi->voltage_drop_t = 0;
pi->caps_sclk_throttle_low_notification = false;
pi->caps_fps = false; /* true? */
radeon_ring_write(ring, PACKET3_ME_INITIALIZE_DEVICE_ID(1));
radeon_ring_write(ring, 0);
radeon_ring_write(ring, 0);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
cayman_cp_enable(rdev, true);
radeon_ring_write(ring, 0x0000000e); /* VGT_VERTEX_REUSE_BLOCK_CNTL */
radeon_ring_write(ring, 0x00000010); /* */
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
/* XXX init other rings */
if (fence) {
r = radeon_fence_emit(rdev, fence, RADEON_RING_TYPE_GFX_INDEX);
}
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
return r;
}
RADEON_ISYNC_ANY3D_IDLE2D |
RADEON_ISYNC_WAIT_IDLEGUI |
RADEON_ISYNC_CPSCRATCH_IDLEGUI);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
}
}
radeon_ring_write(ring, PACKET0(scratch, 0));
radeon_ring_write(ring, 0xDEADBEEF);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
for (i = 0; i < rdev->usec_timeout; i++) {
tmp = RREG32(scratch);
if (tmp == 0xDEADBEEF) {
ib.ptr[6] = PACKET2(0);
ib.ptr[7] = PACKET2(0);
ib.length_dw = 8;
- r = radeon_ib_schedule(rdev, &ib, NULL);
+ r = radeon_ib_schedule(rdev, &ib, NULL, false);
if (r) {
DRM_ERROR("radeon: failed to schedule ib (%d).\n", r);
goto free_ib;
if (fence) {
r = radeon_fence_emit(rdev, fence, RADEON_RING_TYPE_GFX_INDEX);
}
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
return r;
}
radeon_ring_write(ring,
R300_GEOMETRY_ROUND_NEAREST |
R300_COLOR_ROUND_NEAREST);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
}
static void r300_errata(struct radeon_device *rdev)
radeon_ring_write(ring, PACKET0(R300_CP_RESYNC_ADDR, 1));
radeon_ring_write(ring, rdev->config.r300.resync_scratch);
radeon_ring_write(ring, 0xDEADBEEF);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
}
static void r420_cp_errata_fini(struct radeon_device *rdev)
radeon_ring_lock(rdev, ring, 8);
radeon_ring_write(ring, PACKET0(R300_RB3D_DSTCACHE_CTLSTAT, 0));
radeon_ring_write(ring, R300_RB3D_DC_FINISH);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
radeon_scratch_free(rdev, rdev->config.r300.resync_scratch);
}
radeon_ring_write(ring, PACKET3_ME_INITIALIZE_DEVICE_ID(1));
radeon_ring_write(ring, 0);
radeon_ring_write(ring, 0);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
cp_me = 0xff;
WREG32(R_0086D8_CP_ME_CNTL, cp_me);
radeon_ring_write(ring, PACKET3(PACKET3_SET_CONFIG_REG, 1));
radeon_ring_write(ring, ((scratch - PACKET3_SET_CONFIG_REG_OFFSET) >> 2));
radeon_ring_write(ring, 0xDEADBEEF);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
for (i = 0; i < rdev->usec_timeout; i++) {
tmp = RREG32(scratch);
if (tmp == 0xDEADBEEF)
}
}
+/**
+ * r600_semaphore_ring_emit - emit a semaphore on the CP ring
+ *
+ * @rdev: radeon_device pointer
+ * @ring: radeon ring buffer object
+ * @semaphore: radeon semaphore object
+ * @emit_wait: Is this a semaphore wait?
+ *
+ * Emits a semaphore signal/wait packet to the CP ring and prevents the PFP
+ * from running ahead of semaphore waits.
+ */
bool r600_semaphore_ring_emit(struct radeon_device *rdev,
struct radeon_ring *ring,
struct radeon_semaphore *semaphore,
radeon_ring_write(ring, lower_32_bits(addr));
radeon_ring_write(ring, (upper_32_bits(addr) & 0xff) | sel);
+ /* PFP_SYNC_ME packet only exists on 7xx+ */
+ if (emit_wait && (rdev->family >= CHIP_RV770)) {
+ /* Prevent the PFP from running ahead of the semaphore wait */
+ radeon_ring_write(ring, PACKET3(PACKET3_PFP_SYNC_ME, 0));
+ radeon_ring_write(ring, 0x0);
+ }
+
return true;
}
return r;
}
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
radeon_semaphore_free(rdev, &sem, *fence);
return r;
ib.ptr[1] = ((scratch - PACKET3_SET_CONFIG_REG_OFFSET) >> 2);
ib.ptr[2] = 0xDEADBEEF;
ib.length_dw = 3;
- r = radeon_ib_schedule(rdev, &ib, NULL);
+ r = radeon_ib_schedule(rdev, &ib, NULL, false);
if (r) {
DRM_ERROR("radeon: failed to schedule ib (%d).\n", r);
goto free_ib;
radeon_ring_write(ring, rdev->vram_scratch.gpu_addr & 0xfffffffc);
radeon_ring_write(ring, upper_32_bits(rdev->vram_scratch.gpu_addr) & 0xff);
radeon_ring_write(ring, 0xDEADBEEF);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
for (i = 0; i < rdev->usec_timeout; i++) {
tmp = readl(ptr);
ib.ptr[3] = 0xDEADBEEF;
ib.length_dw = 4;
- r = radeon_ib_schedule(rdev, &ib, NULL);
+ r = radeon_ib_schedule(rdev, &ib, NULL, false);
if (r) {
radeon_ib_free(rdev, &ib);
DRM_ERROR("radeon: failed to schedule ib (%d).\n", r);
return r;
}
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
radeon_semaphore_free(rdev, &sem, *fence);
return r;
*/
# define PACKET3_CP_DMA_CMD_SAIC (1 << 28)
# define PACKET3_CP_DMA_CMD_DAIC (1 << 29)
+#define PACKET3_PFP_SYNC_ME 0x42 /* r7xx+ only */
#define PACKET3_SURFACE_SYNC 0x43
# define PACKET3_CB0_DEST_BASE_ENA (1 << 6)
# define PACKET3_FULL_CACHE_ENA (1 << 20) /* r7xx+ only */
extern int radeon_vm_block_size;
extern int radeon_deep_color;
extern int radeon_use_pflipirq;
+extern int radeon_bapm;
/*
* Copy from radeon_drv.h so we don't have to include both and have conflicting
unsigned size);
void radeon_ib_free(struct radeon_device *rdev, struct radeon_ib *ib);
int radeon_ib_schedule(struct radeon_device *rdev, struct radeon_ib *ib,
- struct radeon_ib *const_ib);
+ struct radeon_ib *const_ib, bool hdp_flush);
int radeon_ib_pool_init(struct radeon_device *rdev);
void radeon_ib_pool_fini(struct radeon_device *rdev);
int radeon_ib_ring_tests(struct radeon_device *rdev);
void radeon_ring_free_size(struct radeon_device *rdev, struct radeon_ring *cp);
int radeon_ring_alloc(struct radeon_device *rdev, struct radeon_ring *cp, unsigned ndw);
int radeon_ring_lock(struct radeon_device *rdev, struct radeon_ring *cp, unsigned ndw);
-void radeon_ring_commit(struct radeon_device *rdev, struct radeon_ring *cp);
-void radeon_ring_unlock_commit(struct radeon_device *rdev, struct radeon_ring *cp);
+void radeon_ring_commit(struct radeon_device *rdev, struct radeon_ring *cp,
+ bool hdp_flush);
+void radeon_ring_unlock_commit(struct radeon_device *rdev, struct radeon_ring *cp,
+ bool hdp_flush);
void radeon_ring_undo(struct radeon_ring *ring);
void radeon_ring_unlock_undo(struct radeon_device *rdev, struct radeon_ring *cp);
int radeon_ring_test(struct radeon_device *rdev, struct radeon_ring *cp);
* the buffers used for read only, which doubles the range
* to 0 to 31. 32 is reserved for the kernel driver.
*/
- priority = (r->flags & 0xf) * 2 + !!r->write_domain;
+ priority = (r->flags & RADEON_RELOC_PRIO_MASK) * 2
+ + !!r->write_domain;
/* the first reloc of an UVD job is the msg and that must be in
VRAM, also put everything into VRAM on AGP cards to avoid
radeon_vce_note_usage(rdev);
radeon_cs_sync_rings(parser);
- r = radeon_ib_schedule(rdev, &parser->ib, NULL);
+ r = radeon_ib_schedule(rdev, &parser->ib, NULL, true);
if (r) {
DRM_ERROR("Failed to schedule IB !\n");
}
if ((rdev->family >= CHIP_TAHITI) &&
(parser->chunk_const_ib_idx != -1)) {
- r = radeon_ib_schedule(rdev, &parser->ib, &parser->const_ib);
+ r = radeon_ib_schedule(rdev, &parser->ib, &parser->const_ib, true);
} else {
- r = radeon_ib_schedule(rdev, &parser->ib, NULL);
+ r = radeon_ib_schedule(rdev, &parser->ib, NULL, true);
}
out:
radeon_save_bios_scratch_regs(rdev);
/* block TTM */
resched = ttm_bo_lock_delayed_workqueue(&rdev->mman.bdev);
- radeon_pm_suspend(rdev);
radeon_suspend(rdev);
+ radeon_hpd_fini(rdev);
for (i = 0; i < RADEON_NUM_RINGS; ++i) {
ring_sizes[i] = radeon_ring_backup(rdev, &rdev->ring[i],
}
}
- radeon_pm_resume(rdev);
+ if ((rdev->pm.pm_method == PM_METHOD_DPM) && rdev->pm.dpm_enabled) {
+ /* do dpm late init */
+ r = radeon_pm_late_init(rdev);
+ if (r) {
+ rdev->pm.dpm_enabled = false;
+ DRM_ERROR("radeon_pm_late_init failed, disabling dpm\n");
+ }
+ } else {
+ /* resume old pm late */
+ radeon_pm_resume(rdev);
+ }
+
+ /* init dig PHYs, disp eng pll */
+ if (rdev->is_atom_bios) {
+ radeon_atom_encoder_init(rdev);
+ radeon_atom_disp_eng_pll_init(rdev);
+ /* turn on the BL */
+ if (rdev->mode_info.bl_encoder) {
+ u8 bl_level = radeon_get_backlight_level(rdev,
+ rdev->mode_info.bl_encoder);
+ radeon_set_backlight_level(rdev, rdev->mode_info.bl_encoder,
+ bl_level);
+ }
+ }
+ /* reset hpd state */
+ radeon_hpd_init(rdev);
+
drm_helper_resume_force_mode(rdev->ddev);
+ /* set the power state here in case we are a PX system or headless */
+ if ((rdev->pm.pm_method == PM_METHOD_DPM) && rdev->pm.dpm_enabled)
+ radeon_pm_compute_clocks(rdev);
+
ttm_bo_unlock_delayed_workqueue(&rdev->mman.bdev, resched);
if (r) {
/* bad news, how to tell it to userspace ? */
int radeon_vm_block_size = -1;
int radeon_deep_color = 0;
int radeon_use_pflipirq = 2;
+int radeon_bapm = -1;
MODULE_PARM_DESC(no_wb, "Disable AGP writeback for scratch registers");
module_param_named(no_wb, radeon_no_wb, int, 0444);
MODULE_PARM_DESC(use_pflipirq, "Pflip irqs for pageflip completion (0 = disable, 1 = as fallback, 2 = exclusive (default))");
module_param_named(use_pflipirq, radeon_use_pflipirq, int, 0444);
+MODULE_PARM_DESC(bapm, "BAPM support (1 = enable, 0 = disable, -1 = auto)");
+module_param_named(bapm, radeon_bapm, int, 0444);
+
static struct pci_device_id pciidlist[] = {
radeon_PCI_IDS
};
* @rdev: radeon_device pointer
* @ib: IB object to schedule
* @const_ib: Const IB to schedule (SI only)
+ * @hdp_flush: Whether or not to perform an HDP cache flush
*
* Schedule an IB on the associated ring (all asics).
* Returns 0 on success, error on failure.
* to SI there was just a DE IB.
*/
int radeon_ib_schedule(struct radeon_device *rdev, struct radeon_ib *ib,
- struct radeon_ib *const_ib)
+ struct radeon_ib *const_ib, bool hdp_flush)
{
struct radeon_ring *ring = &rdev->ring[ib->ring];
int r = 0;
if (ib->vm)
radeon_vm_fence(rdev, ib->vm, ib->fence);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, hdp_flush);
return 0;
}
struct radeon_device *rdev = ddev->dev_private;
enum radeon_pm_state_type pm = rdev->pm.dpm.user_state;
- if ((rdev->flags & RADEON_IS_PX) &&
- (ddev->switch_power_state != DRM_SWITCH_POWER_ON))
- return snprintf(buf, PAGE_SIZE, "off\n");
-
return snprintf(buf, PAGE_SIZE, "%s\n",
(pm == POWER_STATE_TYPE_BATTERY) ? "battery" :
(pm == POWER_STATE_TYPE_BALANCED) ? "balanced" : "performance");
struct drm_device *ddev = dev_get_drvdata(dev);
struct radeon_device *rdev = ddev->dev_private;
- /* Can't set dpm state when the card is off */
- if ((rdev->flags & RADEON_IS_PX) &&
- (ddev->switch_power_state != DRM_SWITCH_POWER_ON))
- return -EINVAL;
-
mutex_lock(&rdev->pm.mutex);
if (strncmp("battery", buf, strlen("battery")) == 0)
rdev->pm.dpm.user_state = POWER_STATE_TYPE_BATTERY;
goto fail;
}
mutex_unlock(&rdev->pm.mutex);
- radeon_pm_compute_clocks(rdev);
+
+ /* Can't set dpm state when the card is off */
+ if (!(rdev->flags & RADEON_IS_PX) ||
+ (ddev->switch_power_state == DRM_SWITCH_POWER_ON))
+ radeon_pm_compute_clocks(rdev);
+
fail:
return count;
}
*
* @rdev: radeon_device pointer
* @ring: radeon_ring structure holding ring information
+ * @hdp_flush: Whether or not to perform an HDP cache flush
*
* Update the wptr (write pointer) to tell the GPU to
* execute new commands on the ring buffer (all asics).
*/
-void radeon_ring_commit(struct radeon_device *rdev, struct radeon_ring *ring)
+void radeon_ring_commit(struct radeon_device *rdev, struct radeon_ring *ring,
+ bool hdp_flush)
{
/* If we are emitting the HDP flush via the ring buffer, we need to
* do it before padding.
*/
- if (rdev->asic->ring[ring->idx]->hdp_flush)
+ if (hdp_flush && rdev->asic->ring[ring->idx]->hdp_flush)
rdev->asic->ring[ring->idx]->hdp_flush(rdev, ring);
/* We pad to match fetch size */
while (ring->wptr & ring->align_mask) {
/* If we are emitting the HDP flush via MMIO, we need to do it after
* all CPU writes to VRAM finished.
*/
- if (rdev->asic->mmio_hdp_flush)
+ if (hdp_flush && rdev->asic->mmio_hdp_flush)
rdev->asic->mmio_hdp_flush(rdev);
radeon_ring_set_wptr(rdev, ring);
}
*
* @rdev: radeon_device pointer
* @ring: radeon_ring structure holding ring information
+ * @hdp_flush: Whether or not to perform an HDP cache flush
*
* Call radeon_ring_commit() then unlock the ring (all asics).
*/
-void radeon_ring_unlock_commit(struct radeon_device *rdev, struct radeon_ring *ring)
+void radeon_ring_unlock_commit(struct radeon_device *rdev, struct radeon_ring *ring,
+ bool hdp_flush)
{
- radeon_ring_commit(rdev, ring);
+ radeon_ring_commit(rdev, ring, hdp_flush);
mutex_unlock(&rdev->ring_lock);
}
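
Across the call sites in this series the new hdp_flush argument follows one
convention: only the userspace command-submission path requests the HDP cache
flush, since that is where the CPU may have dirtied memory behind the GPU's
back; kernel-internal ring writes pass false. The two representative calls
from the hunks above:

	/* userspace CS ioctl path: flush the HDP cache */
	r = radeon_ib_schedule(rdev, &parser->ib, NULL, true);

	/* kernel-internal submissions: no flush needed */
	radeon_ring_unlock_commit(rdev, ring, false);
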
radeon_ring_write(ring, data[i]);
}
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
kfree(data);
return 0;
}
/* Allocate ring buffer */
if (ring->ring_obj == NULL) {
r = radeon_bo_create(rdev, ring->ring_size, PAGE_SIZE, true,
- RADEON_GEM_DOMAIN_GTT,
- (rdev->flags & RADEON_IS_PCIE) ?
- RADEON_GEM_GTT_WC : 0,
+ RADEON_GEM_DOMAIN_GTT, 0,
NULL, &ring->ring_obj);
if (r) {
dev_err(rdev->dev, "(%d) ring create failed\n", r);
continue;
}
- radeon_ring_commit(rdev, &rdev->ring[i]);
+ radeon_ring_commit(rdev, &rdev->ring[i], false);
radeon_fence_note_sync(fence, ring);
semaphore->gpu_addr += 8;
return r;
}
radeon_fence_emit(rdev, fence, ring->idx);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
}
return 0;
}
goto out_cleanup;
}
radeon_semaphore_emit_wait(rdev, ringA->idx, semaphore);
- radeon_ring_unlock_commit(rdev, ringA);
+ radeon_ring_unlock_commit(rdev, ringA, false);
r = radeon_test_create_and_emit_fence(rdev, ringA, &fence1);
if (r)
goto out_cleanup;
}
radeon_semaphore_emit_wait(rdev, ringA->idx, semaphore);
- radeon_ring_unlock_commit(rdev, ringA);
+ radeon_ring_unlock_commit(rdev, ringA, false);
r = radeon_test_create_and_emit_fence(rdev, ringA, &fence2);
if (r)
goto out_cleanup;
}
radeon_semaphore_emit_signal(rdev, ringB->idx, semaphore);
- radeon_ring_unlock_commit(rdev, ringB);
+ radeon_ring_unlock_commit(rdev, ringB, false);
r = radeon_fence_wait(fence1, false);
if (r) {
goto out_cleanup;
}
radeon_semaphore_emit_signal(rdev, ringB->idx, semaphore);
- radeon_ring_unlock_commit(rdev, ringB);
+ radeon_ring_unlock_commit(rdev, ringB, false);
r = radeon_fence_wait(fence2, false);
if (r) {
goto out_cleanup;
}
radeon_semaphore_emit_wait(rdev, ringA->idx, semaphore);
- radeon_ring_unlock_commit(rdev, ringA);
+ radeon_ring_unlock_commit(rdev, ringA, false);
r = radeon_test_create_and_emit_fence(rdev, ringA, &fenceA);
if (r)
goto out_cleanup;
}
radeon_semaphore_emit_wait(rdev, ringB->idx, semaphore);
- radeon_ring_unlock_commit(rdev, ringB);
+ radeon_ring_unlock_commit(rdev, ringB, false);
r = radeon_test_create_and_emit_fence(rdev, ringB, &fenceB);
if (r)
goto out_cleanup;
goto out_cleanup;
}
radeon_semaphore_emit_signal(rdev, ringC->idx, semaphore);
- radeon_ring_unlock_commit(rdev, ringC);
+ radeon_ring_unlock_commit(rdev, ringC, false);
for (i = 0; i < 30; ++i) {
mdelay(100);
goto out_cleanup;
}
radeon_semaphore_emit_signal(rdev, ringC->idx, semaphore);
- radeon_ring_unlock_commit(rdev, ringC);
+ radeon_ring_unlock_commit(rdev, ringC, false);
mdelay(1000);
ib.ptr[i] = PACKET2(0);
ib.length_dw = 16;
- r = radeon_ib_schedule(rdev, &ib, NULL);
+ r = radeon_ib_schedule(rdev, &ib, NULL, false);
if (r)
goto err;
ttm_eu_fence_buffer_objects(&ticket, &head, ib.fence);
for (i = ib.length_dw; i < ib_size_dw; ++i)
ib.ptr[i] = 0x0;
- r = radeon_ib_schedule(rdev, &ib, NULL);
+ r = radeon_ib_schedule(rdev, &ib, NULL, false);
if (r) {
DRM_ERROR("radeon: failed to schedule ib (%d).\n", r);
}
for (i = ib.length_dw; i < ib_size_dw; ++i)
ib.ptr[i] = 0x0;
- r = radeon_ib_schedule(rdev, &ib, NULL);
+ r = radeon_ib_schedule(rdev, &ib, NULL, false);
if (r) {
DRM_ERROR("radeon: failed to schedule ib (%d).\n", r);
}
return r;
}
radeon_ring_write(ring, VCE_CMD_END);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
for (i = 0; i < rdev->usec_timeout; i++) {
if (vce_v1_0_get_rptr(rdev, ring) != rptr)
radeon_asic_vm_pad_ib(rdev, &ib);
WARN_ON(ib.length_dw > 64);
- r = radeon_ib_schedule(rdev, &ib, NULL);
+ r = radeon_ib_schedule(rdev, &ib, NULL, false);
if (r)
goto error;
/* add a clone of the bo_va to clear the old address */
struct radeon_bo_va *tmp;
tmp = kzalloc(sizeof(struct radeon_bo_va), GFP_KERNEL);
+ if (!tmp) {
+ mutex_unlock(&vm->mutex);
+ return -ENOMEM;
+ }
tmp->it.start = bo_va->it.start;
tmp->it.last = bo_va->it.last;
tmp->vm = vm;
radeon_semaphore_sync_to(ib.semaphore, pd->tbo.sync_obj);
radeon_semaphore_sync_to(ib.semaphore, vm->last_id_use);
WARN_ON(ib.length_dw > ndw);
- r = radeon_ib_schedule(rdev, &ib, NULL);
+ r = radeon_ib_schedule(rdev, &ib, NULL, false);
if (r) {
radeon_ib_free(rdev, &ib);
return r;
WARN_ON(ib.length_dw > ndw);
radeon_semaphore_sync_to(ib.semaphore, vm->fence);
- r = radeon_ib_schedule(rdev, &ib, NULL);
+ r = radeon_ib_schedule(rdev, &ib, NULL, false);
if (r) {
radeon_ib_free(rdev, &ib);
return r;
radeon_ring_write(ring, GEOMETRY_ROUND_NEAREST | COLOR_ROUND_NEAREST);
radeon_ring_write(ring, PACKET0(0x20C8, 0));
radeon_ring_write(ring, 0);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
}
int rv515_mc_wait_for_idle(struct radeon_device *rdev)
return r;
}
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
radeon_semaphore_free(rdev, &sem, *fence);
return r;
u32 sx_debug_1;
u32 hdp_host_path_cntl;
u32 tmp;
- int i, j, k;
+ int i, j;
switch (rdev->family) {
case CHIP_TAHITI:
rdev->config.si.max_sh_per_se,
rdev->config.si.max_cu_per_sh);
+ rdev->config.si.active_cus = 0;
for (i = 0; i < rdev->config.si.max_shader_engines; i++) {
for (j = 0; j < rdev->config.si.max_sh_per_se; j++) {
- for (k = 0; k < rdev->config.si.max_cu_per_sh; k++) {
- rdev->config.si.active_cus +=
- hweight32(si_get_cu_active_bitmap(rdev, i, j));
- }
+ rdev->config.si.active_cus +=
+ hweight32(si_get_cu_active_bitmap(rdev, i, j));
}
}
radeon_ring_write(ring, PACKET3_BASE_INDEX(CE_PARTITION_BASE));
radeon_ring_write(ring, 0xc000);
radeon_ring_write(ring, 0xe000);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
si_cp_enable(rdev, true);
radeon_ring_write(ring, 0x0000000e); /* VGT_VERTEX_REUSE_BLOCK_CNTL */
radeon_ring_write(ring, 0x00000010); /* VGT_OUT_DEALLOC_CNTL */
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
for (i = RADEON_RING_TYPE_GFX_INDEX; i <= CAYMAN_RING_TYPE_CP2_INDEX; ++i) {
ring = &rdev->ring[i];
radeon_ring_write(ring, PACKET3_COMPUTE(PACKET3_CLEAR_STATE, 0));
radeon_ring_write(ring, 0);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
}
return 0;
/* flush hdp cache */
radeon_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 3));
- radeon_ring_write(ring, (WRITE_DATA_ENGINE_SEL(0) |
+ radeon_ring_write(ring, (WRITE_DATA_ENGINE_SEL(1) |
WRITE_DATA_DST_SEL(0)));
radeon_ring_write(ring, HDP_MEM_COHERENCY_FLUSH_CNTL >> 2);
radeon_ring_write(ring, 0);
/* bits 0-15 are the VM contexts0-15 */
radeon_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 3));
- radeon_ring_write(ring, (WRITE_DATA_ENGINE_SEL(0) |
+ radeon_ring_write(ring, (WRITE_DATA_ENGINE_SEL(1) |
WRITE_DATA_DST_SEL(0)));
radeon_ring_write(ring, VM_INVALIDATE_REQUEST >> 2);
radeon_ring_write(ring, 0);
return r;
}
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
radeon_semaphore_free(rdev, &sem, *fence);
return r;
for (i = 0; i < SUMO_MAX_HARDWARE_POWERLEVELS; i++)
pi->at[i] = TRINITY_AT_DFLT;
- /* There are stability issues reported on with
- * bapm enabled when switching between AC and battery
- * power. At the same time, some MSI boards hang
- * if it's not enabled and dpm is enabled. Just enable
- * it for MSI boards right now.
- */
- if (rdev->pdev->subsystem_vendor == 0x1462)
- pi->enable_bapm = true;
- else
+ if (radeon_bapm == -1) {
+		/* There are stability issues reported with
+ * bapm enabled when switching between AC and battery
+ * power. At the same time, some MSI boards hang
+ * if it's not enabled and dpm is enabled. Just enable
+ * it for MSI boards right now.
+ */
+ if (rdev->pdev->subsystem_vendor == 0x1462)
+ pi->enable_bapm = true;
+ else
+ pi->enable_bapm = false;
+ } else if (radeon_bapm == 0) {
pi->enable_bapm = false;
+ } else {
+ pi->enable_bapm = true;
+ }
pi->enable_nbps_policy = true;
pi->enable_sclk_ds = true;
pi->enable_gfx_power_gating = true;
radeon_ring_write(ring, PACKET0(UVD_SEMA_CNTL, 0));
radeon_ring_write(ring, 3);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
done:
/* lower clocks again */
}
radeon_ring_write(ring, PACKET0(UVD_CONTEXT_ID, 0));
radeon_ring_write(ring, 0xDEADBEEF);
- radeon_ring_unlock_commit(rdev, ring);
+ radeon_ring_unlock_commit(rdev, ring, false);
for (i = 0; i < rdev->usec_timeout; i++) {
tmp = RREG32(UVD_CONTEXT_ID);
if (tmp == 0xDEADBEEF)
if ((connect_mask & HID_CONNECT_HIDRAW) && !hidraw_connect(hdev))
hdev->claimed |= HID_CLAIMED_HIDRAW;
+ if (connect_mask & HID_CONNECT_DRIVER)
+ hdev->claimed |= HID_CLAIMED_DRIVER;
+
/* Drivers with the ->raw_event callback set are not required to connect
* to any other listener. */
if (!hdev->claimed && !hdev->driver->raw_event) {
#define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_73F7 0x73f7
#define USB_DEVICE_ID_DWAV_EGALAX_MULTITOUCH_A001 0xa001
+#define USB_VENDOR_ID_ELAN 0x04f3
+#define USB_DEVICE_ID_ELAN_TOUCHSCREEN 0x0089
+
#define USB_VENDOR_ID_ELECOM 0x056e
#define USB_DEVICE_ID_ELECOM_BM084 0x0061
#define USB_DEVICE_ID_PI_ENGINEERING_VEC_USB_FOOTPEDAL 0xff
#define USB_VENDOR_ID_PIXART 0x093a
+#define USB_DEVICE_ID_PIXART_USB_OPTICAL_MOUSE_ID2 0x0137
+#define USB_DEVICE_ID_PIXART_USB_OPTICAL_MOUSE 0x2510
#define USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN 0x8001
#define USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN1 0x8002
#define USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN2 0x8003
djdev = djrcv_dev->paired_dj_devices[dj_report->device_index];
- if (!djdev) {
- dbg_hid("djrcv_dev->paired_dj_devices[dj_report->device_index]"
- " is NULL, index %d\n", dj_report->device_index);
- kfifo_in(&djrcv_dev->notif_fifo, dj_report, sizeof(struct dj_report));
-
- if (schedule_work(&djrcv_dev->work) == 0) {
- dbg_hid("%s: did not schedule the work item, was already "
- "queued\n", __func__);
- }
- return;
- }
-
memset(reportbuffer, 0, sizeof(reportbuffer));
for (i = 0; i < NUMBER_OF_HID_REPORTS; i++) {
dj_device = djrcv_dev->paired_dj_devices[dj_report->device_index];
- if (dj_device == NULL) {
- dbg_hid("djrcv_dev->paired_dj_devices[dj_report->device_index]"
- " is NULL, index %d\n", dj_report->device_index);
- kfifo_in(&djrcv_dev->notif_fifo, dj_report, sizeof(struct dj_report));
-
- if (schedule_work(&djrcv_dev->work) == 0) {
- dbg_hid("%s: did not schedule the work item, was already "
- "queued\n", __func__);
- }
- return;
- }
-
if ((dj_report->report_type > ARRAY_SIZE(hid_reportid_size_map) - 1) ||
(hid_reportid_size_map[dj_report->report_type] == 0)) {
dbg_hid("invalid report type:%x\n", dj_report->report_type);
struct dj_receiver_dev *djrcv_dev = hid_get_drvdata(hdev);
struct dj_report *dj_report = (struct dj_report *) data;
unsigned long flags;
- bool report_processed = false;
dbg_hid("%s, size:%d\n", __func__, size);
* device (via hid_input_report() ) and return 1 so hid-core does not do
* anything else with it.
*/
+
+ /* case 1) */
+ if (data[0] != REPORT_ID_DJ_SHORT)
+ return false;
+
if ((dj_report->device_index < DJ_DEVICE_INDEX_MIN) ||
(dj_report->device_index > DJ_DEVICE_INDEX_MAX)) {
- dev_err(&hdev->dev, "%s: invalid device index:%d\n",
+ /*
+ * Device index is wrong, bail out.
+		 * This driver can safely ignore the receiver notifications,
+ * so ignore those reports too.
+ */
+ if (dj_report->device_index != DJ_RECEIVER_INDEX)
+ dev_err(&hdev->dev, "%s: invalid device index:%d\n",
__func__, dj_report->device_index);
return false;
}
spin_lock_irqsave(&djrcv_dev->lock, flags);
- if (dj_report->report_id == REPORT_ID_DJ_SHORT) {
- switch (dj_report->report_type) {
- case REPORT_TYPE_NOTIF_DEVICE_PAIRED:
- case REPORT_TYPE_NOTIF_DEVICE_UNPAIRED:
- logi_dj_recv_queue_notification(djrcv_dev, dj_report);
- break;
- case REPORT_TYPE_NOTIF_CONNECTION_STATUS:
- if (dj_report->report_params[CONNECTION_STATUS_PARAM_STATUS] ==
- STATUS_LINKLOSS) {
- logi_dj_recv_forward_null_report(djrcv_dev, dj_report);
- }
- break;
- default:
- logi_dj_recv_forward_report(djrcv_dev, dj_report);
+
+ if (!djrcv_dev->paired_dj_devices[dj_report->device_index]) {
+ /* received an event for an unknown device, bail out */
+ logi_dj_recv_queue_notification(djrcv_dev, dj_report);
+ goto out;
+ }
+
+ switch (dj_report->report_type) {
+ case REPORT_TYPE_NOTIF_DEVICE_PAIRED:
+ /* pairing notifications are handled above the switch */
+ break;
+ case REPORT_TYPE_NOTIF_DEVICE_UNPAIRED:
+ logi_dj_recv_queue_notification(djrcv_dev, dj_report);
+ break;
+ case REPORT_TYPE_NOTIF_CONNECTION_STATUS:
+ if (dj_report->report_params[CONNECTION_STATUS_PARAM_STATUS] ==
+ STATUS_LINKLOSS) {
+ logi_dj_recv_forward_null_report(djrcv_dev, dj_report);
}
- report_processed = true;
+ break;
+ default:
+ logi_dj_recv_forward_report(djrcv_dev, dj_report);
}
+
+out:
spin_unlock_irqrestore(&djrcv_dev->lock, flags);
- return report_processed;
+ return true;
}
static int logi_dj_probe(struct hid_device *hdev,
#define DJ_MAX_PAIRED_DEVICES 6
#define DJ_MAX_NUMBER_NOTIFICATIONS 8
+#define DJ_RECEIVER_INDEX 0
#define DJ_DEVICE_INDEX_MIN 1
#define DJ_DEVICE_INDEX_MAX 6
if (size < 4 || ((size - 4) % 9) != 0)
return 0;
npoints = (size - 4) / 9;
+ if (npoints > 15) {
+ hid_warn(hdev, "invalid size value (%d) for TRACKPAD_REPORT_ID\n",
+ size);
+ return 0;
+ }
msc->ntouches = 0;
for (ii = 0; ii < npoints; ii++)
magicmouse_emit_touch(msc, ii, data + ii * 9 + 4);
if (size < 6 || ((size - 6) % 8) != 0)
return 0;
npoints = (size - 6) / 8;
+ if (npoints > 15) {
+ hid_warn(hdev, "invalid size value (%d) for MOUSE_REPORT_ID\n",
+ size);
+ return 0;
+ }
msc->ntouches = 0;
for (ii = 0; ii < npoints; ii++)
magicmouse_emit_touch(msc, ii, data + ii * 8 + 6);
if (!data)
return 1;
+ if (size > 64) {
+ hid_warn(hdev, "invalid size value (%d) for picolcd raw event (%d)\n",
+ size, report->id);
+ return 0;
+ }
+
if (report->id == REPORT_KEY_STATE) {
if (data->input_keys)
ret = picolcd_raw_keypad(data, report, raw_data+1, size-1);
int offset;
int i;
- if (size < hdata->f11.report_size)
- return 0;
-
- if (!(irq & hdata->f11.irq_mask))
+ if (!(irq & hdata->f11.irq_mask) || size <= 0)
return 0;
offset = (hdata->max_fingers >> 2) + 1;
int fs_bit_position = (i & 0x3) << 1;
int finger_state = (data[fs_byte_position] >> fs_bit_position) &
0x03;
+ int position = offset + 5 * i;
+
+ if (position + 5 > size) {
+ /* partial report, go on with what we received */
+ printk_once(KERN_WARNING
+ "%s %s: Detected incomplete finger report. Finger reports may occasionally get dropped on this platform.\n",
+ dev_driver_string(&hdev->dev),
+ dev_name(&hdev->dev));
+ hid_dbg(hdev, "Incomplete finger report\n");
+ break;
+ }
- rmi_f11_process_touch(hdata, i, finger_state,
- &data[offset + 5 * i]);
+ rmi_f11_process_touch(hdata, i, finger_state, &data[position]);
}
input_mt_sync_frame(hdata->input);
input_sync(hdata->input);
if (!(irq & hdata->f30.irq_mask))
return 0;
+ if (size < (int)hdata->f30.report_size) {
+ hid_warn(hdev, "Click Button pressed, but the click data is missing\n");
+ return 0;
+ }
+
for (i = 0; i < hdata->gpio_led_count; i++) {
if (test_bit(i, &hdata->button_mask)) {
value = (data[i / 8] >> (i & 0x07)) & BIT(0);
return 1;
}
+static int rmi_check_sanity(struct hid_device *hdev, u8 *data, int size)
+{
+ int valid_size = size;
+ /*
+	 * On the Dell XPS 13 9333, the bus sometimes gets confused and fills
+	 * the report with a sentinel value "ff". Synaptics told us that such
+	 * behavior does not come from the touchpad itself, so we filter out
+ * such reports here.
+ */
+
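+	/* bounds check first so an all-0xff report never reads data[-1] */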
+	while (valid_size > 0 && data[valid_size - 1] == 0xff)
+ valid_size--;
+
+ return valid_size;
+}
+
static int rmi_raw_event(struct hid_device *hdev,
struct hid_report *report, u8 *data, int size)
{
+ size = rmi_check_sanity(hdev, data, size);
+ if (size < 2)
+ return 0;
+
switch (data[0]) {
case RMI_READ_DATA_REPORT_ID:
return rmi_read_data_event(hdev, data, size);
USB_DEVICE_ID_MS_TYPE_COVER_2),
.driver_data = HID_SENSOR_HUB_ENUM_QUIRK},
{ HID_DEVICE(HID_BUS_ANY, HID_GROUP_SENSOR_HUB, USB_VENDOR_ID_STM_0,
+ USB_DEVICE_ID_STM_HID_SENSOR),
+ .driver_data = HID_SENSOR_HUB_ENUM_QUIRK},
+ { HID_DEVICE(HID_BUS_ANY, HID_GROUP_SENSOR_HUB, USB_VENDOR_ID_STM_0,
USB_DEVICE_ID_STM_HID_SENSOR_1),
.driver_data = HID_SENSOR_HUB_ENUM_QUIRK},
{ HID_DEVICE(HID_BUS_ANY, HID_GROUP_SENSOR_HUB, USB_VENDOR_ID_TEXAS_INSTRUMENTS,
/*
- * HID driver for Sony / PS2 / PS3 BD devices.
+ * HID driver for Sony / PS2 / PS3 / PS4 BD devices.
*
* Copyright (c) 1999 Andreas Gal
* Copyright (c) 2000-2005 Vojtech Pavlik <vojtech@suse.cz>
* Copyright (c) 2012 David Dillow <dave@thedillows.org>
* Copyright (c) 2006-2013 Jiri Kosina
* Copyright (c) 2013 Colin Leitner <colin.leitner@gmail.com>
+ * Copyright (c) 2014 Frank Praznik <frank.praznik@gmail.com>
*/
/*
0x75, 0x06, /* Report Size (6), */
0x95, 0x01, /* Report Count (1), */
0x15, 0x00, /* Logical Minimum (0), */
- 0x25, 0x7F, /* Logical Maximum (127), */
+ 0x25, 0x3F, /* Logical Maximum (63), */
0x81, 0x02, /* Input (Variable), */
0x05, 0x01, /* Usage Page (Desktop), */
0x09, 0x33, /* Usage (Rx), */
0x81, 0x02, /* Input (Variable), */
0x19, 0x43, /* Usage Minimum (43h), */
0x29, 0x45, /* Usage Maximum (45h), */
- 0x16, 0xFF, 0xBF, /* Logical Minimum (-16385), */
- 0x26, 0x00, 0x40, /* Logical Maximum (16384), */
+ 0x16, 0x00, 0xE0, /* Logical Minimum (-8192), */
+ 0x26, 0xFF, 0x1F, /* Logical Maximum (8191), */
0x95, 0x03, /* Report Count (3), */
0x81, 0x02, /* Input (Variable), */
0x06, 0x00, 0xFF, /* Usage Page (FF00h), */
0x09, 0x21, /* Usage (21h), */
0x15, 0x00, /* Logical Minimum (0), */
- 0x25, 0xFF, /* Logical Maximum (255), */
+ 0x26, 0xFF, 0x00, /* Logical Maximum (255), */
0x75, 0x08, /* Report Size (8), */
0x95, 0x27, /* Report Count (39), */
0x81, 0x02, /* Input (Variable), */
/*
* The default behavior of the Dualshock 4 is to send reports using report
- * type 1 when running over Bluetooth. However, as soon as it receives a
- * report of type 17 to set the LEDs or rumble it starts returning it's state
- * in report 17 instead of 1. Since report 17 is undefined in the default HID
+ * type 1 when running over Bluetooth. However, when feature report 2 is
+ * requested during controller initialization, it starts sending input
+ * reports in report 17. Since report 17 is undefined in the default HID
* descriptor the button and axis definitions must be moved to report 17 or
- * the HID layer won't process the received input once a report is sent.
+ * the HID layer won't process the received input.
*/
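For reference, the feature-report request described above can be reproduced from user-space through hidraw. This is a hedged sketch, not part of the patch: the device node path is a placeholder and the 64-byte buffer size is an assumption, but HIDIOCGFEATURE and the report-number-in-byte-0 convention are the standard hidraw API:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/hidraw.h>

int main(void)
{
	unsigned char buf[64] = { 0x02 };	/* report number 2 in byte 0 */
	int fd = open("/dev/hidraw0", O_RDWR);	/* placeholder node */

	if (fd < 0)
		return 1;

	/* after this GET_REPORT the controller moves to input report 17 */
	if (ioctl(fd, HIDIOCGFEATURE(sizeof(buf)), buf) < 0)
		perror("HIDIOCGFEATURE");

	close(fd);
	return 0;
}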
static u8 dualshock4_bt_rdesc[] = {
0x05, 0x01, /* Usage Page (Desktop), */
0x81, 0x02, /* Input (Variable), */
0x19, 0x43, /* Usage Minimum (43h), */
0x29, 0x45, /* Usage Maximum (45h), */
- 0x16, 0xFF, 0xBF, /* Logical Minimum (-16385), */
- 0x26, 0x00, 0x40, /* Logical Maximum (16384), */
+ 0x16, 0x00, 0xE0, /* Logical Minimum (-8192), */
+ 0x26, 0xFF, 0x1F, /* Logical Maximum (8191), */
0x95, 0x03, /* Report Count (3), */
0x81, 0x02, /* Input (Variable), */
0x06, 0x00, 0xFF, /* Usage Page (FF00h), */
if (rd[30] >= 0xee) {
battery_capacity = 100;
battery_charging = !(rd[30] & 0x01);
+ cable_state = 1;
} else {
__u8 index = rd[30] <= 5 ? rd[30] : 5;
battery_capacity = sixaxis_battery_capacity[index];
battery_charging = 0;
+ cable_state = 0;
}
- cable_state = !(rd[31] & 0x04);
spin_lock_irqsave(&sc->lock, flags);
sc->cable_state = cable_state;
return 0;
}
+static int sony_register_touchpad(struct hid_input *hi, int touch_count,
+ int w, int h)
+{
+ struct input_dev *input_dev = hi->input;
+ int ret;
+
+ ret = input_mt_init_slots(input_dev, touch_count, 0);
+ if (ret < 0)
+ return ret;
+
+ input_set_abs_params(input_dev, ABS_MT_POSITION_X, 0, w, 0, 0);
+ input_set_abs_params(input_dev, ABS_MT_POSITION_Y, 0, h, 0, 0);
+
+ return 0;
+}
+
+static void sony_input_configured(struct hid_device *hdev,
+ struct hid_input *hidinput)
+{
+ struct sony_sc *sc = hid_get_drvdata(hdev);
+
+ /*
+ * The Dualshock 4 touchpad supports 2 touches and has a
+ * resolution of 1920x942 (44.86 dots/mm).
+ */
+ if (sc->quirks & DUALSHOCK4_CONTROLLER) {
+ if (sony_register_touchpad(hidinput, 2, 1920, 942) != 0)
+ hid_err(sc->hdev,
+ "Unable to initialize multi-touch slots\n");
+ }
+}
+
/*
* Sending HID_REQ_GET_REPORT changes the operation mode of the ps3 controller
* to "operational". Without this, the ps3 controller will not report any
sc->battery.name = NULL;
}
-static int sony_register_touchpad(struct sony_sc *sc, int touch_count,
- int w, int h)
-{
- struct hid_input *hidinput = list_entry(sc->hdev->inputs.next,
- struct hid_input, list);
- struct input_dev *input_dev = hidinput->input;
- int ret;
-
- ret = input_mt_init_slots(input_dev, touch_count, 0);
- if (ret < 0) {
- hid_err(sc->hdev, "Unable to initialize multi-touch slots\n");
- return ret;
- }
-
- input_set_abs_params(input_dev, ABS_MT_POSITION_X, 0, w, 0, 0);
- input_set_abs_params(input_dev, ABS_MT_POSITION_Y, 0, h, 0, 0);
-
- return 0;
-}
-
/*
* If a controller is plugged in via USB while already connected via Bluetooth
* it will show up as two devices. A global list of connected controllers and
goto err_stop;
}
}
- /*
- * The Dualshock 4 touchpad supports 2 touches and has a
- * resolution of 1920x940.
- */
- ret = sony_register_touchpad(sc, 2, 1920, 940);
- if (ret < 0)
- goto err_stop;
sony_init_work(sc, dualshock4_state_worker);
} else {
MODULE_DEVICE_TABLE(hid, sony_devices);
static struct hid_driver sony_driver = {
- .name = "sony",
- .id_table = sony_devices,
- .input_mapping = sony_mapping,
- .probe = sony_probe,
- .remove = sony_remove,
- .report_fixup = sony_report_fixup,
- .raw_event = sony_raw_event
+ .name = "sony",
+ .id_table = sony_devices,
+ .input_mapping = sony_mapping,
+ .input_configured = sony_input_configured,
+ .probe = sony_probe,
+ .remove = sony_remove,
+ .report_fixup = sony_report_fixup,
+ .raw_event = sony_raw_event
};
static int __init sony_init(void)
if (!tdev->fwinfo) {
hid_err(hdev, "unsupported firmware %c\n", tdev->version.major);
+ err = -ENODEV;
goto stop;
}
__u8 tail;
struct uhid_event *outq[UHID_BUFSIZE];
+ /* blocking GET_REPORT support; state changes protected by qlock */
struct mutex report_lock;
wait_queue_head_t report_wait;
- atomic_t report_done;
- atomic_t report_id;
+ bool report_running;
+ u32 report_id;
+ u32 report_type;
struct uhid_event report_buf;
};
static int uhid_hid_start(struct hid_device *hid)
{
struct uhid_device *uhid = hid->driver_data;
+ struct uhid_event *ev;
+ unsigned long flags;
+
+ ev = kzalloc(sizeof(*ev), GFP_KERNEL);
+ if (!ev)
+ return -ENOMEM;
+
+ ev->type = UHID_START;
+	if (hid->report_enum[HID_FEATURE_REPORT].numbered)
+		ev->u.start.dev_flags |= UHID_DEV_NUMBERED_FEATURE_REPORTS;
+	if (hid->report_enum[HID_OUTPUT_REPORT].numbered)
+		ev->u.start.dev_flags |= UHID_DEV_NUMBERED_OUTPUT_REPORTS;
+	if (hid->report_enum[HID_INPUT_REPORT].numbered)
+		ev->u.start.dev_flags |= UHID_DEV_NUMBERED_INPUT_REPORTS;
+
+	spin_lock_irqsave(&uhid->qlock, flags);
+	uhid_queue(uhid, ev);
+	spin_unlock_irqrestore(&uhid->qlock, flags);
+
-	return uhid_queue_event(uhid, UHID_START);
+	return 0;
}
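On the user-space side, a transport driver can pick these flags up from the UHID_START event to decide whether raw reports must carry a report-number prefix. A minimal sketch, assuming fd is an already-open /dev/uhid and ignoring short reads:

#include <linux/uhid.h>
#include <stdbool.h>
#include <unistd.h>

/* returns true once UHID_START has been seen; *numbered tells the
 * transport whether feature reports carry a report-number prefix */
static bool handle_start(int fd, bool *numbered)
{
	struct uhid_event ev;

	if (read(fd, &ev, sizeof(ev)) <= 0 || ev.type != UHID_START)
		return false;

	*numbered = ev.u.start.dev_flags & UHID_DEV_NUMBERED_FEATURE_REPORTS;
	return true;
}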
static void uhid_hid_stop(struct hid_device *hid)
return hid_parse_report(hid, uhid->rd_data, uhid->rd_size);
}
-static int uhid_hid_get_raw(struct hid_device *hid, unsigned char rnum,
- __u8 *buf, size_t count, unsigned char rtype)
+/* must be called with report_lock held */
+static int __uhid_report_queue_and_wait(struct uhid_device *uhid,
+ struct uhid_event *ev,
+ __u32 *report_id)
+{
+ unsigned long flags;
+ int ret;
+
+ spin_lock_irqsave(&uhid->qlock, flags);
+ *report_id = ++uhid->report_id;
+ uhid->report_type = ev->type + 1;
+ uhid->report_running = true;
+ uhid_queue(uhid, ev);
+ spin_unlock_irqrestore(&uhid->qlock, flags);
+
+ ret = wait_event_interruptible_timeout(uhid->report_wait,
+ !uhid->report_running || !uhid->running,
+ 5 * HZ);
+ if (!ret || !uhid->running || uhid->report_running)
+ ret = -EIO;
+ else if (ret < 0)
+ ret = -ERESTARTSYS;
+ else
+ ret = 0;
+
+ uhid->report_running = false;
+
+ return ret;
+}
+
+static void uhid_report_wake_up(struct uhid_device *uhid, u32 id,
+ const struct uhid_event *ev)
+{
+ unsigned long flags;
+
+ spin_lock_irqsave(&uhid->qlock, flags);
+
+ /* id for old report; drop it silently */
+ if (uhid->report_type != ev->type || uhid->report_id != id)
+ goto unlock;
+ if (!uhid->report_running)
+ goto unlock;
+
+ memcpy(&uhid->report_buf, ev, sizeof(*ev));
+ uhid->report_running = false;
+ wake_up_interruptible(&uhid->report_wait);
+
+unlock:
+ spin_unlock_irqrestore(&uhid->qlock, flags);
+}
+
+static int uhid_hid_get_report(struct hid_device *hid, unsigned char rnum,
+ u8 *buf, size_t count, u8 rtype)
{
struct uhid_device *uhid = hid->driver_data;
- __u8 report_type;
+ struct uhid_get_report_reply_req *req;
struct uhid_event *ev;
- unsigned long flags;
int ret;
- size_t uninitialized_var(len);
- struct uhid_feature_answer_req *req;
if (!uhid->running)
return -EIO;
- switch (rtype) {
- case HID_FEATURE_REPORT:
- report_type = UHID_FEATURE_REPORT;
- break;
- case HID_OUTPUT_REPORT:
- report_type = UHID_OUTPUT_REPORT;
- break;
- case HID_INPUT_REPORT:
- report_type = UHID_INPUT_REPORT;
- break;
- default:
- return -EINVAL;
- }
+ ev = kzalloc(sizeof(*ev), GFP_KERNEL);
+ if (!ev)
+ return -ENOMEM;
+
+ ev->type = UHID_GET_REPORT;
+ ev->u.get_report.rnum = rnum;
+ ev->u.get_report.rtype = rtype;
ret = mutex_lock_interruptible(&uhid->report_lock);
- if (ret)
+ if (ret) {
+ kfree(ev);
return ret;
+ }
- ev = kzalloc(sizeof(*ev), GFP_KERNEL);
- if (!ev) {
- ret = -ENOMEM;
+ /* this _always_ takes ownership of @ev */
+ ret = __uhid_report_queue_and_wait(uhid, ev, &ev->u.get_report.id);
+ if (ret)
goto unlock;
+
+ req = &uhid->report_buf.u.get_report_reply;
+ if (req->err) {
+ ret = -EIO;
+ } else {
+ ret = min3(count, (size_t)req->size, (size_t)UHID_DATA_MAX);
+ memcpy(buf, req->data, ret);
}
- spin_lock_irqsave(&uhid->qlock, flags);
- ev->type = UHID_FEATURE;
- ev->u.feature.id = atomic_inc_return(&uhid->report_id);
- ev->u.feature.rnum = rnum;
- ev->u.feature.rtype = report_type;
+unlock:
+ mutex_unlock(&uhid->report_lock);
+ return ret;
+}
- atomic_set(&uhid->report_done, 0);
- uhid_queue(uhid, ev);
- spin_unlock_irqrestore(&uhid->qlock, flags);
+static int uhid_hid_set_report(struct hid_device *hid, unsigned char rnum,
+ const u8 *buf, size_t count, u8 rtype)
+{
+ struct uhid_device *uhid = hid->driver_data;
+ struct uhid_event *ev;
+ int ret;
- ret = wait_event_interruptible_timeout(uhid->report_wait,
- atomic_read(&uhid->report_done), 5 * HZ);
-
- /*
- * Make sure "uhid->running" is cleared on shutdown before
- * "uhid->report_done" is set.
- */
- smp_rmb();
- if (!ret || !uhid->running) {
- ret = -EIO;
- } else if (ret < 0) {
- ret = -ERESTARTSYS;
- } else {
- spin_lock_irqsave(&uhid->qlock, flags);
- req = &uhid->report_buf.u.feature_answer;
+ if (!uhid->running || count > UHID_DATA_MAX)
+ return -EIO;
- if (req->err) {
- ret = -EIO;
- } else {
- ret = 0;
- len = min(count,
- min_t(size_t, req->size, UHID_DATA_MAX));
- memcpy(buf, req->data, len);
- }
+ ev = kzalloc(sizeof(*ev), GFP_KERNEL);
+ if (!ev)
+ return -ENOMEM;
+
+ ev->type = UHID_SET_REPORT;
+ ev->u.set_report.rnum = rnum;
+ ev->u.set_report.rtype = rtype;
+ ev->u.set_report.size = count;
+ memcpy(ev->u.set_report.data, buf, count);
- spin_unlock_irqrestore(&uhid->qlock, flags);
+ ret = mutex_lock_interruptible(&uhid->report_lock);
+ if (ret) {
+ kfree(ev);
+ return ret;
}
- atomic_set(&uhid->report_done, 1);
+ /* this _always_ takes ownership of @ev */
+ ret = __uhid_report_queue_and_wait(uhid, ev, &ev->u.set_report.id);
+ if (ret)
+ goto unlock;
+
+ if (uhid->report_buf.u.set_report_reply.err)
+ ret = -EIO;
+ else
+ ret = count;
unlock:
mutex_unlock(&uhid->report_lock);
- return ret ? ret : len;
+ return ret;
+}
+
+static int uhid_hid_raw_request(struct hid_device *hid, unsigned char reportnum,
+ __u8 *buf, size_t len, unsigned char rtype,
+ int reqtype)
+{
+ u8 u_rtype;
+
+ switch (rtype) {
+ case HID_FEATURE_REPORT:
+ u_rtype = UHID_FEATURE_REPORT;
+ break;
+ case HID_OUTPUT_REPORT:
+ u_rtype = UHID_OUTPUT_REPORT;
+ break;
+ case HID_INPUT_REPORT:
+ u_rtype = UHID_INPUT_REPORT;
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ switch (reqtype) {
+ case HID_REQ_GET_REPORT:
+ return uhid_hid_get_report(hid, reportnum, buf, len, u_rtype);
+ case HID_REQ_SET_REPORT:
+ return uhid_hid_set_report(hid, reportnum, buf, len, u_rtype);
+ default:
+ return -EIO;
+ }
}
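A user-space driver answers these blocking requests by echoing the request id back in the matching reply event; if no reply (or a reply with a stale id) arrives, __uhid_report_queue_and_wait() times out after 5 seconds and the request fails with -EIO. A minimal sketch of the GET_REPORT side, assuming fd is an open /dev/uhid; the one-byte payload is made up for illustration:

#include <linux/uhid.h>
#include <string.h>
#include <unistd.h>

/* answer a UHID_GET_REPORT request with a made-up one-byte report */
static void answer_get_report(int fd, const struct uhid_event *req)
{
	struct uhid_event reply;

	memset(&reply, 0, sizeof(reply));
	reply.type = UHID_GET_REPORT_REPLY;
	reply.u.get_report_reply.id = req->u.get_report.id;
	reply.u.get_report_reply.err = 0;
	reply.u.get_report_reply.size = 1;
	reply.u.get_report_reply.data[0] = req->u.get_report.rnum;

	(void)write(fd, &reply, sizeof(reply));
}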
static int uhid_hid_output_raw(struct hid_device *hid, __u8 *buf, size_t count,
return uhid_hid_output_raw(hid, buf, count, HID_OUTPUT_REPORT);
}
-static int uhid_raw_request(struct hid_device *hid, unsigned char reportnum,
- __u8 *buf, size_t len, unsigned char rtype,
- int reqtype)
-{
- switch (reqtype) {
- case HID_REQ_GET_REPORT:
- return uhid_hid_get_raw(hid, reportnum, buf, len, rtype);
- case HID_REQ_SET_REPORT:
- /* TODO: implement proper SET_REPORT functionality */
- return -ENOSYS;
- default:
- return -EIO;
- }
-}
-
static struct hid_ll_driver uhid_hid_driver = {
.start = uhid_hid_start,
.stop = uhid_hid_stop,
.open = uhid_hid_open,
.close = uhid_hid_close,
.parse = uhid_hid_parse,
+ .raw_request = uhid_hid_raw_request,
.output_report = uhid_hid_output_report,
- .raw_request = uhid_raw_request,
};
#ifdef CONFIG_COMPAT
}
#endif
-static int uhid_dev_create(struct uhid_device *uhid,
- const struct uhid_event *ev)
+static int uhid_dev_create2(struct uhid_device *uhid,
+ const struct uhid_event *ev)
{
struct hid_device *hid;
+ size_t rd_size, len;
+ void *rd_data;
int ret;
if (uhid->running)
return -EALREADY;
- uhid->rd_size = ev->u.create.rd_size;
- if (uhid->rd_size <= 0 || uhid->rd_size > HID_MAX_DESCRIPTOR_SIZE)
+ rd_size = ev->u.create2.rd_size;
+ if (rd_size <= 0 || rd_size > HID_MAX_DESCRIPTOR_SIZE)
return -EINVAL;
- uhid->rd_data = kmalloc(uhid->rd_size, GFP_KERNEL);
- if (!uhid->rd_data)
+ rd_data = kmemdup(ev->u.create2.rd_data, rd_size, GFP_KERNEL);
+ if (!rd_data)
return -ENOMEM;
- if (copy_from_user(uhid->rd_data, ev->u.create.rd_data,
- uhid->rd_size)) {
- ret = -EFAULT;
- goto err_free;
- }
+ uhid->rd_size = rd_size;
+ uhid->rd_data = rd_data;
hid = hid_allocate_device();
	if (IS_ERR(hid)) {
		ret = PTR_ERR(hid);
		goto err_free;
}
- strncpy(hid->name, ev->u.create.name, 127);
- hid->name[127] = 0;
- strncpy(hid->phys, ev->u.create.phys, 63);
- hid->phys[63] = 0;
- strncpy(hid->uniq, ev->u.create.uniq, 63);
- hid->uniq[63] = 0;
+ len = min(sizeof(hid->name), sizeof(ev->u.create2.name)) - 1;
+ strncpy(hid->name, ev->u.create2.name, len);
+ len = min(sizeof(hid->phys), sizeof(ev->u.create2.phys)) - 1;
+ strncpy(hid->phys, ev->u.create2.phys, len);
+ len = min(sizeof(hid->uniq), sizeof(ev->u.create2.uniq)) - 1;
+ strncpy(hid->uniq, ev->u.create2.uniq, len);
hid->ll_driver = &uhid_hid_driver;
- hid->bus = ev->u.create.bus;
- hid->vendor = ev->u.create.vendor;
- hid->product = ev->u.create.product;
- hid->version = ev->u.create.version;
- hid->country = ev->u.create.country;
+ hid->bus = ev->u.create2.bus;
+ hid->vendor = ev->u.create2.vendor;
+ hid->product = ev->u.create2.product;
+ hid->version = ev->u.create2.version;
+ hid->country = ev->u.create2.country;
hid->driver_data = uhid;
hid->dev.parent = uhid_misc.this_device;
uhid->running = false;
err_free:
kfree(uhid->rd_data);
+ uhid->rd_data = NULL;
+ uhid->rd_size = 0;
return ret;
}
-static int uhid_dev_create2(struct uhid_device *uhid,
- const struct uhid_event *ev)
+static int uhid_dev_create(struct uhid_device *uhid,
+ struct uhid_event *ev)
{
- struct hid_device *hid;
- int ret;
+ struct uhid_create_req orig;
- if (uhid->running)
- return -EALREADY;
+ orig = ev->u.create;
- uhid->rd_size = ev->u.create2.rd_size;
- if (uhid->rd_size <= 0 || uhid->rd_size > HID_MAX_DESCRIPTOR_SIZE)
+ if (orig.rd_size <= 0 || orig.rd_size > HID_MAX_DESCRIPTOR_SIZE)
return -EINVAL;
+ if (copy_from_user(&ev->u.create2.rd_data, orig.rd_data, orig.rd_size))
+ return -EFAULT;
- uhid->rd_data = kmemdup(ev->u.create2.rd_data, uhid->rd_size,
- GFP_KERNEL);
- if (!uhid->rd_data)
- return -ENOMEM;
-
- hid = hid_allocate_device();
- if (IS_ERR(hid)) {
- ret = PTR_ERR(hid);
- goto err_free;
- }
-
- strncpy(hid->name, ev->u.create2.name, 127);
- hid->name[127] = 0;
- strncpy(hid->phys, ev->u.create2.phys, 63);
- hid->phys[63] = 0;
- strncpy(hid->uniq, ev->u.create2.uniq, 63);
- hid->uniq[63] = 0;
-
- hid->ll_driver = &uhid_hid_driver;
- hid->bus = ev->u.create2.bus;
- hid->vendor = ev->u.create2.vendor;
- hid->product = ev->u.create2.product;
- hid->version = ev->u.create2.version;
- hid->country = ev->u.create2.country;
- hid->driver_data = uhid;
- hid->dev.parent = uhid_misc.this_device;
-
- uhid->hid = hid;
- uhid->running = true;
-
- ret = hid_add_device(hid);
- if (ret) {
- hid_err(hid, "Cannot register HID device\n");
- goto err_hid;
- }
-
- return 0;
-
-err_hid:
- hid_destroy_device(hid);
- uhid->hid = NULL;
- uhid->running = false;
-err_free:
- kfree(uhid->rd_data);
- return ret;
+ memcpy(ev->u.create2.name, orig.name, sizeof(orig.name));
+ memcpy(ev->u.create2.phys, orig.phys, sizeof(orig.phys));
+ memcpy(ev->u.create2.uniq, orig.uniq, sizeof(orig.uniq));
+ ev->u.create2.rd_size = orig.rd_size;
+ ev->u.create2.bus = orig.bus;
+ ev->u.create2.vendor = orig.vendor;
+ ev->u.create2.product = orig.product;
+ ev->u.create2.version = orig.version;
+ ev->u.create2.country = orig.country;
+
+ return uhid_dev_create2(uhid, ev);
}
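From user-space, the fixed-size interface is driven by filling a single struct uhid_event and writing it to the char device. A minimal sketch, assuming fd is an open /dev/uhid; the two descriptor bytes are placeholders, not a complete report descriptor:

#include <linux/input.h>	/* BUS_USB */
#include <linux/uhid.h>
#include <string.h>
#include <unistd.h>

static int create_device(int fd)
{
	struct uhid_event ev;
	/* placeholder bytes, not a complete report descriptor */
	static const unsigned char rdesc[] = { 0x05, 0x01 };

	memset(&ev, 0, sizeof(ev));
	ev.type = UHID_CREATE2;
	strcpy((char *)ev.u.create2.name, "uhid-sample");
	memcpy(ev.u.create2.rd_data, rdesc, sizeof(rdesc));
	ev.u.create2.rd_size = sizeof(rdesc);
	ev.u.create2.bus = BUS_USB;

	return write(fd, &ev, sizeof(ev)) < 0 ? -1 : 0;
}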
static int uhid_dev_destroy(struct uhid_device *uhid)
if (!uhid->running)
return -EINVAL;
- /* clear "running" before setting "report_done" */
uhid->running = false;
- smp_wmb();
- atomic_set(&uhid->report_done, 1);
wake_up_interruptible(&uhid->report_wait);
hid_destroy_device(uhid->hid);
return 0;
}
-static int uhid_dev_feature_answer(struct uhid_device *uhid,
- struct uhid_event *ev)
+static int uhid_dev_get_report_reply(struct uhid_device *uhid,
+ struct uhid_event *ev)
{
- unsigned long flags;
-
if (!uhid->running)
return -EINVAL;
- spin_lock_irqsave(&uhid->qlock, flags);
-
- /* id for old report; drop it silently */
- if (atomic_read(&uhid->report_id) != ev->u.feature_answer.id)
- goto unlock;
- if (atomic_read(&uhid->report_done))
- goto unlock;
+ uhid_report_wake_up(uhid, ev->u.get_report_reply.id, ev);
+ return 0;
+}
- memcpy(&uhid->report_buf, ev, sizeof(*ev));
- atomic_set(&uhid->report_done, 1);
- wake_up_interruptible(&uhid->report_wait);
+static int uhid_dev_set_report_reply(struct uhid_device *uhid,
+ struct uhid_event *ev)
+{
+ if (!uhid->running)
+ return -EINVAL;
-unlock:
- spin_unlock_irqrestore(&uhid->qlock, flags);
+ uhid_report_wake_up(uhid, ev->u.set_report_reply.id, ev);
return 0;
}
init_waitqueue_head(&uhid->waitq);
init_waitqueue_head(&uhid->report_wait);
uhid->running = false;
- atomic_set(&uhid->report_done, 1);
file->private_data = uhid;
nonseekable_open(inode, file);
case UHID_INPUT2:
ret = uhid_dev_input2(uhid, &uhid->input_buf);
break;
- case UHID_FEATURE_ANSWER:
- ret = uhid_dev_feature_answer(uhid, &uhid->input_buf);
+ case UHID_GET_REPORT_REPLY:
+ ret = uhid_dev_get_report_reply(uhid, &uhid->input_buf);
+ break;
+ case UHID_SET_REPORT_REPLY:
+ ret = uhid_dev_set_report_reply(uhid, &uhid->input_buf);
break;
default:
ret = -EOPNOTSUPP;
struct usbhid_device *usbhid = hid->driver_data;
spin_lock_irqsave(&usbhid->lock, flags);
- if (hid->open > 0 &&
+ if ((hid->open > 0 || hid->quirks & HID_QUIRK_ALWAYS_POLL) &&
!test_bit(HID_DISCONNECTED, &usbhid->iofl) &&
!test_bit(HID_SUSPENDED, &usbhid->iofl) &&
!test_and_set_bit(HID_IN_RUNNING, &usbhid->iofl)) {
case 0: /* success */
usbhid_mark_busy(usbhid);
usbhid->retry_delay = 0;
+ if ((hid->quirks & HID_QUIRK_ALWAYS_POLL) && !hid->open)
+ break;
hid_input_report(urb->context, HID_INPUT_REPORT,
urb->transfer_buffer,
urb->actual_length, 1);
if (!--hid->open) {
spin_unlock_irq(&usbhid->lock);
hid_cancel_delayed_stuff(usbhid);
- usb_kill_urb(usbhid->urbin);
- usbhid->intf->needs_remote_wakeup = 0;
+ if (!(hid->quirks & HID_QUIRK_ALWAYS_POLL)) {
+ usb_kill_urb(usbhid->urbin);
+ usbhid->intf->needs_remote_wakeup = 0;
+ }
} else {
spin_unlock_irq(&usbhid->lock);
}
set_bit(HID_STARTED, &usbhid->iofl);
+ if (hid->quirks & HID_QUIRK_ALWAYS_POLL) {
+ ret = usb_autopm_get_interface(usbhid->intf);
+ if (ret)
+ goto fail;
+ usbhid->intf->needs_remote_wakeup = 1;
+ ret = hid_start_in(hid);
+ if (ret) {
+ dev_err(&hid->dev,
+ "failed to start in urb: %d\n", ret);
+ }
+ usb_autopm_put_interface(usbhid->intf);
+ }
+
/* Some keyboards don't work until their LEDs have been set.
* Since BIOSes do set the LEDs, it must be safe for any device
* that supports the keyboard boot protocol.
if (WARN_ON(!usbhid))
return;
+ if (hid->quirks & HID_QUIRK_ALWAYS_POLL)
+ usbhid->intf->needs_remote_wakeup = 0;
+
clear_bit(HID_STARTED, &usbhid->iofl);
spin_lock_irq(&usbhid->lock); /* Sync with error and led handlers */
set_bit(HID_DISCONNECTED, &usbhid->iofl);
{ USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_3AXIS_5BUTTON_STICK, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_CH, USB_DEVICE_ID_CH_AXIS_295, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_DMI, USB_DEVICE_ID_DMI_ENC, HID_QUIRK_NOGET },
+ { USB_VENDOR_ID_ELAN, USB_DEVICE_ID_ELAN_TOUCHSCREEN, HID_QUIRK_ALWAYS_POLL },
{ USB_VENDOR_ID_ELO, USB_DEVICE_ID_ELO_TS2700, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_FORMOSA, USB_DEVICE_ID_FORMOSA_IR_RECEIVER, HID_QUIRK_NO_INIT_REPORTS },
{ USB_VENDOR_ID_FREESCALE, USB_DEVICE_ID_FREESCALE_MX28, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_NOVATEK, USB_DEVICE_ID_NOVATEK_MOUSE, HID_QUIRK_NO_INIT_REPORTS },
{ USB_VENDOR_ID_PENMOUNT, USB_DEVICE_ID_PENMOUNT_1610, HID_QUIRK_NOGET },
{ USB_VENDOR_ID_PENMOUNT, USB_DEVICE_ID_PENMOUNT_1640, HID_QUIRK_NOGET },
+ { USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_USB_OPTICAL_MOUSE, HID_QUIRK_ALWAYS_POLL },
+ { USB_VENDOR_ID_KYE, USB_DEVICE_ID_PIXART_USB_OPTICAL_MOUSE_ID2, HID_QUIRK_ALWAYS_POLL },
{ USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN, HID_QUIRK_NO_INIT_REPORTS },
{ USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN1, HID_QUIRK_NO_INIT_REPORTS },
{ USB_VENDOR_ID_PIXART, USB_DEVICE_ID_PIXART_OPTICAL_TOUCH_SCREEN2, HID_QUIRK_NO_INIT_REPORTS },
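For devices that are not in this table, the same behavior can usually be forced at module load time through usbhid's quirks parameter. The VID:PID pair below matches the PixArt mouse entry above, and 0x00000400 is assumed to be the HID_QUIRK_ALWAYS_POLL bit value at the time of this series:

	modprobe usbhid quirks=0x093a:0x2510:0x00000400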
#include <linux/slab.h>
#include <linux/module.h>
#include <linux/mod_devicetable.h>
+#include <linux/hid.h>
#include <linux/usb/input.h>
#include <linux/power_supply.h>
#include <asm/unaligned.h>
struct wacom_wac *wacom_wac);
int wacom_setup_pad_input_capabilities(struct input_dev *input_dev,
struct wacom_wac *wacom_wac);
+void wacom_wac_usage_mapping(struct hid_device *hdev,
+ struct hid_field *field, struct hid_usage *usage);
+int wacom_wac_event(struct hid_device *hdev, struct hid_field *field,
+ struct hid_usage *usage, __s32 value);
+void wacom_wac_report(struct hid_device *hdev, struct hid_report *report);
#endif
#include "wacom_wac.h"
#include "wacom.h"
-#include <linux/hid.h>
#define WAC_MSG_RETRIES 5
+#define WAC_CMD_WL_LED_CONTROL 0x03
#define WAC_CMD_LED_CONTROL 0x20
#define WAC_CMD_ICON_START 0x21
#define WAC_CMD_ICON_XFER 0x23
#define WAC_CMD_ICON_BT_XFER 0x26
#define WAC_CMD_RETRIES 10
-static int wacom_get_report(struct hid_device *hdev, u8 type, u8 id,
- void *buf, size_t size, unsigned int retries)
+#define DEV_ATTR_RW_PERM (S_IRUGO | S_IWUSR | S_IWGRP)
+#define DEV_ATTR_WO_PERM (S_IWUSR | S_IWGRP)
+
+static int wacom_get_report(struct hid_device *hdev, u8 type, u8 *buf,
+ size_t size, unsigned int retries)
{
int retval;
do {
- retval = hid_hw_raw_request(hdev, id, buf, size, type,
+ retval = hid_hw_raw_request(hdev, buf[0], buf, size, type,
HID_REQ_GET_REPORT);
} while ((retval == -ETIMEDOUT || retval == -EPIPE) && --retries);
{
struct wacom *wacom = hid_get_drvdata(hdev);
struct wacom_features *features = &wacom->wacom_wac.features;
+ struct hid_data *hid_data = &wacom->wacom_wac.hid_data;
+ u8 *data;
+ int ret;
switch (usage->hid) {
case HID_DG_CONTACTMAX:
/* leave touch_max as is if predefined */
- if (!features->touch_max)
- features->touch_max = field->value[0];
+ if (!features->touch_max) {
+ /* read manually */
+ data = kzalloc(2, GFP_KERNEL);
+ if (!data)
+ break;
+ data[0] = field->report->id;
+ ret = wacom_get_report(hdev, HID_FEATURE_REPORT,
+ data, 2, 0);
+ if (ret == 2)
+ features->touch_max = data[1];
+ kfree(data);
+ }
+ break;
+ case HID_DG_INPUTMODE:
+ /* Ignore if value index is out of bounds. */
+ if (usage->usage_index >= field->report_count) {
+ dev_err(&hdev->dev, "HID_DG_INPUTMODE out of range\n");
+ break;
+ }
+
+ hid_data->inputmode = field->report->id;
+ hid_data->inputmode_index = usage->usage_index;
break;
}
}
features->pressure_max = field->logical_maximum;
break;
}
+
+ if (features->type == HID_GENERIC)
+ wacom_wac_usage_mapping(hdev, field, usage);
}
static void wacom_parse_hid(struct hid_device *hdev,
}
}
+static int wacom_hid_set_device_mode(struct hid_device *hdev)
+{
+ struct wacom *wacom = hid_get_drvdata(hdev);
+ struct hid_data *hid_data = &wacom->wacom_wac.hid_data;
+ struct hid_report *r;
+ struct hid_report_enum *re;
+
+ if (hid_data->inputmode < 0)
+ return 0;
+
+ re = &(hdev->report_enum[HID_FEATURE_REPORT]);
+ r = re->report_id_hash[hid_data->inputmode];
+ if (r) {
+ r->field[0]->value[hid_data->inputmode_index] = 2;
+ hid_hw_request(hdev, r, HID_REQ_SET_REPORT);
+ }
+ return 0;
+}
+
static int wacom_set_device_mode(struct hid_device *hdev, int report_id,
int length, int mode)
{
length, 1);
if (error >= 0)
error = wacom_get_report(hdev, HID_FEATURE_REPORT,
- report_id, rep_data, length, 1);
+ rep_data, length, 1);
} while ((error < 0 || rep_data[1] != mode) && limit++ < WAC_MSG_RETRIES);
kfree(rep_data);
if (hdev->bus == BUS_BLUETOOTH)
return wacom_bt_query_tablet_data(hdev, 1, features);
+ if (features->type == HID_GENERIC)
+ return wacom_hid_set_device_mode(hdev);
+
if (features->device_type == BTN_TOOL_FINGER) {
if (features->type > TABLETPC) {
/* MT Tablet PC touch */
{
unsigned char *buf;
int retval;
+ unsigned char report_id = WAC_CMD_LED_CONTROL;
+ int buf_size = 9;
- buf = kzalloc(9, GFP_KERNEL);
+ if (wacom->wacom_wac.pid) { /* wireless connected */
+ report_id = WAC_CMD_WL_LED_CONTROL;
+ buf_size = 13;
+ }
+ buf = kzalloc(buf_size, GFP_KERNEL);
if (!buf)
return -ENOMEM;
int ring_led = wacom->led.select[0] & 0x03;
int ring_lum = (((wacom->led.llv & 0x60) >> 5) - 1) & 0x03;
int crop_lum = 0;
-
- buf[0] = WAC_CMD_LED_CONTROL;
- buf[1] = (crop_lum << 4) | (ring_lum << 2) | (ring_led);
+ unsigned char led_bits = (crop_lum << 4) | (ring_lum << 2) | (ring_led);
+
+ buf[0] = report_id;
+ if (wacom->wacom_wac.pid) {
+ wacom_get_report(wacom->hdev, HID_FEATURE_REPORT,
+ buf, buf_size, WAC_CMD_RETRIES);
+ buf[0] = report_id;
+ buf[4] = led_bits;
+ } else
+ buf[1] = led_bits;
}
else {
int led = wacom->led.select[0] | 0x4;
wacom->wacom_wac.features.type == WACOM_24HD)
led |= (wacom->led.select[1] << 4) | 0x40;
- buf[0] = WAC_CMD_LED_CONTROL;
+ buf[0] = report_id;
buf[1] = led;
buf[2] = wacom->led.llv;
buf[3] = wacom->led.hlv;
buf[4] = wacom->led.img_lum;
}
- retval = wacom_set_report(wacom->hdev, HID_FEATURE_REPORT, buf, 9,
+ retval = wacom_set_report(wacom->hdev, HID_FEATURE_REPORT, buf, buf_size,
WAC_CMD_RETRIES);
kfree(buf);
{ \
struct hid_device *hdev = container_of(dev, struct hid_device, dev);\
struct wacom *wacom = hid_get_drvdata(hdev); \
- return snprintf(buf, 2, "%d\n", wacom->led.select[SET_ID]); \
+ return scnprintf(buf, PAGE_SIZE, "%d\n", \
+ wacom->led.select[SET_ID]); \
} \
-static DEVICE_ATTR(status_led##SET_ID##_select, S_IWUSR | S_IRUSR, \
+static DEVICE_ATTR(status_led##SET_ID##_select, DEV_ATTR_RW_PERM, \
wacom_led##SET_ID##_select_show, \
wacom_led##SET_ID##_select_store)
return wacom_luminance_store(wacom, &wacom->led.field, \
buf, count); \
} \
-static DEVICE_ATTR(name##_luminance, S_IWUSR, \
- NULL, wacom_##name##_luminance_store)
+static ssize_t wacom_##name##_luminance_show(struct device *dev, \
+ struct device_attribute *attr, char *buf) \
+{ \
+ struct wacom *wacom = dev_get_drvdata(dev); \
+ return scnprintf(buf, PAGE_SIZE, "%d\n", wacom->led.field); \
+} \
+static DEVICE_ATTR(name##_luminance, DEV_ATTR_RW_PERM, \
+ wacom_##name##_luminance_show, \
+ wacom_##name##_luminance_store)
DEVICE_LUMINANCE_ATTR(status0, llv);
DEVICE_LUMINANCE_ATTR(status1, hlv);
{ \
return wacom_button_image_store(dev, BUTTON_ID, buf, count); \
} \
-static DEVICE_ATTR(button##BUTTON_ID##_rawimg, S_IWUSR, \
+static DEVICE_ATTR(button##BUTTON_ID##_rawimg, DEV_ATTR_WO_PERM, \
NULL, wacom_btnimg##BUTTON_ID##_store)
DEVICE_BTNIMG_ATTR(0);
return count;
}
-static DEVICE_ATTR(speed, S_IRUGO | S_IWUSR | S_IWGRP,
+static DEVICE_ATTR(speed, DEV_ATTR_RW_PERM,
wacom_show_speed, wacom_store_speed);
static struct input_dev *wacom_allocate_input(struct wacom *wacom)
input_dev->uniq = hdev->uniq;
input_dev->id.bustype = hdev->bus;
input_dev->id.vendor = hdev->vendor;
- input_dev->id.product = hdev->product;
+ input_dev->id.product = wacom_wac->pid ? wacom_wac->pid : hdev->product;
input_dev->id.version = hdev->version;
input_set_drvdata(input_dev, wacom);
return input_dev;
}
-static void wacom_unregister_inputs(struct wacom *wacom)
+static void wacom_free_inputs(struct wacom *wacom)
{
- if (wacom->wacom_wac.input)
- input_unregister_device(wacom->wacom_wac.input);
- if (wacom->wacom_wac.pad_input)
- input_unregister_device(wacom->wacom_wac.pad_input);
- wacom->wacom_wac.input = NULL;
- wacom->wacom_wac.pad_input = NULL;
+ struct wacom_wac *wacom_wac = &(wacom->wacom_wac);
+
+ if (wacom_wac->input)
+ input_free_device(wacom_wac->input);
+ if (wacom_wac->pad_input)
+ input_free_device(wacom_wac->pad_input);
+ wacom_wac->input = NULL;
+ wacom_wac->pad_input = NULL;
}
-static int wacom_register_inputs(struct wacom *wacom)
+static int wacom_allocate_inputs(struct wacom *wacom)
{
struct input_dev *input_dev, *pad_input_dev;
struct wacom_wac *wacom_wac = &(wacom->wacom_wac);
- int error;
input_dev = wacom_allocate_input(wacom);
pad_input_dev = wacom_allocate_input(wacom);
if (!input_dev || !pad_input_dev) {
- error = -ENOMEM;
- goto fail1;
+ wacom_free_inputs(wacom);
+ return -ENOMEM;
}
wacom_wac->input = input_dev;
wacom_wac->pad_input = pad_input_dev;
wacom_wac->pad_input->name = wacom_wac->pad_name;
+ return 0;
+}
+
+static void wacom_clean_inputs(struct wacom *wacom)
+{
+ if (wacom->wacom_wac.input) {
+ if (wacom->wacom_wac.input_registered)
+ input_unregister_device(wacom->wacom_wac.input);
+ else
+ input_free_device(wacom->wacom_wac.input);
+ }
+ if (wacom->wacom_wac.pad_input) {
+ if (wacom->wacom_wac.input_registered)
+ input_unregister_device(wacom->wacom_wac.pad_input);
+ else
+ input_free_device(wacom->wacom_wac.pad_input);
+ }
+ wacom->wacom_wac.input = NULL;
+ wacom->wacom_wac.pad_input = NULL;
+ wacom_destroy_leds(wacom);
+}
+
+static int wacom_register_inputs(struct wacom *wacom)
+{
+ struct input_dev *input_dev, *pad_input_dev;
+ struct wacom_wac *wacom_wac = &(wacom->wacom_wac);
+ int error;
+
+ input_dev = wacom_wac->input;
+ pad_input_dev = wacom_wac->pad_input;
+
+ if (!input_dev || !pad_input_dev)
+ return -EINVAL;
+
error = wacom_setup_input_capabilities(input_dev, wacom_wac);
if (error)
- goto fail2;
+ return error;
error = input_register_device(input_dev);
if (error)
- goto fail2;
+ return error;
error = wacom_setup_pad_input_capabilities(pad_input_dev, wacom_wac);
if (error) {
} else {
error = input_register_device(pad_input_dev);
if (error)
- goto fail3;
+ goto fail_register_pad_input;
+
+ error = wacom_initialize_leds(wacom);
+ if (error)
+ goto fail_leds;
}
+ wacom_wac->input_registered = true;
+
return 0;
-fail3:
+fail_leds:
+ input_unregister_device(pad_input_dev);
+ pad_input_dev = NULL;
+fail_register_pad_input:
input_unregister_device(input_dev);
- input_dev = NULL;
-fail2:
wacom_wac->input = NULL;
- wacom_wac->pad_input = NULL;
-fail1:
- if (input_dev)
- input_free_device(input_dev);
- if (pad_input_dev)
- input_free_device(pad_input_dev);
return error;
}
hdev1 = usb_get_intfdata(usbdev->config->interface[1]);
wacom1 = hid_get_drvdata(hdev1);
wacom_wac1 = &(wacom1->wacom_wac);
- wacom_unregister_inputs(wacom1);
+ wacom_clean_inputs(wacom1);
/* Touch interface */
hdev2 = usb_get_intfdata(usbdev->config->interface[2]);
wacom2 = hid_get_drvdata(hdev2);
wacom_wac2 = &(wacom2->wacom_wac);
- wacom_unregister_inputs(wacom2);
+ wacom_clean_inputs(wacom2);
if (wacom_wac->pid == 0) {
hid_info(wacom->hdev, "wireless tablet disconnected\n");
wacom_wac1->features.name);
wacom_wac1->shared->touch_max = wacom_wac1->features.touch_max;
wacom_wac1->shared->type = wacom_wac1->features.type;
- error = wacom_register_inputs(wacom1);
+ wacom_wac1->pid = wacom_wac->pid;
+ error = wacom_allocate_inputs(wacom1) ||
+ wacom_register_inputs(wacom1);
if (error)
goto fail;
"%s (WL) Pad",wacom_wac2->features.name);
snprintf(wacom_wac2->pad_name, WACOM_NAME_MAX,
"%s (WL) Pad", wacom_wac2->features.name);
- error = wacom_register_inputs(wacom2);
+ wacom_wac2->pid = wacom_wac->pid;
+ error = wacom_allocate_inputs(wacom2) ||
+ wacom_register_inputs(wacom2);
if (error)
goto fail;
return;
fail:
- wacom_unregister_inputs(wacom1);
- wacom_unregister_inputs(wacom2);
+ wacom_clean_inputs(wacom1);
+ wacom_clean_inputs(wacom2);
return;
}
struct wacom_wac *wacom_wac;
struct wacom_features *features;
int error;
+ unsigned int connect_mask = HID_CONNECT_HIDRAW;
if (!id->driver_data)
return -EINVAL;
+ hdev->quirks |= HID_QUIRK_NO_INIT_REPORTS;
+
wacom = kzalloc(sizeof(struct wacom), GFP_KERNEL);
if (!wacom)
return -ENOMEM;
error = hid_parse(hdev);
if (error) {
hid_err(hdev, "parse failed\n");
- goto fail1;
+ goto fail_parse;
}
wacom_wac = &wacom->wacom_wac;
features->pktlen = wacom_compute_pktlen(hdev);
if (features->pktlen > WACOM_PKGLEN_MAX) {
error = -EINVAL;
- goto fail1;
+ goto fail_pktlen;
}
if (features->check_for_hid_type && features->hid_type != hdev->type) {
error = -ENODEV;
- goto fail1;
+ goto fail_type;
}
wacom->usbdev = dev;
mutex_init(&wacom->lock);
INIT_WORK(&wacom->work, wacom_wireless_work);
+ if (!(features->quirks & WACOM_QUIRK_NO_INPUT)) {
+ error = wacom_allocate_inputs(wacom);
+ if (error)
+ goto fail_allocate_inputs;
+ }
+
/* set the default size in case we do not get them from hid */
wacom_set_default_phy(features);
error = wacom_add_shared_data(hdev);
if (error)
- goto fail1;
+ goto fail_shared_data;
}
- error = wacom_initialize_leds(wacom);
- if (error)
- goto fail2;
-
if (!(features->quirks & WACOM_QUIRK_MONITOR) &&
(features->quirks & WACOM_QUIRK_BATTERY)) {
error = wacom_initialize_battery(wacom);
if (error)
- goto fail3;
+ goto fail_battery;
}
if (!(features->quirks & WACOM_QUIRK_NO_INPUT)) {
error = wacom_register_inputs(wacom);
if (error)
- goto fail4;
+ goto fail_register_inputs;
}
if (hdev->bus == BUS_BLUETOOTH) {
error);
}
- /* Note that if query fails it is not a hard failure */
- wacom_query_tablet_data(hdev, features);
+ if (features->type == HID_GENERIC)
+ connect_mask |= HID_CONNECT_DRIVER;
/* Regular HID work starts now */
- error = hid_hw_start(hdev, HID_CONNECT_HIDRAW);
+ error = hid_hw_start(hdev, connect_mask);
if (error) {
hid_err(hdev, "hw start failed\n");
- goto fail5;
+ goto fail_hw_start;
}
+ /* Note that if query fails it is not a hard failure */
+ wacom_query_tablet_data(hdev, features);
+
if (features->quirks & WACOM_QUIRK_MONITOR)
error = hid_hw_open(hdev);
return 0;
- fail5: if (hdev->bus == BUS_BLUETOOTH)
+fail_hw_start:
+ if (hdev->bus == BUS_BLUETOOTH)
device_remove_file(&hdev->dev, &dev_attr_speed);
- wacom_unregister_inputs(wacom);
- fail4: wacom_destroy_battery(wacom);
- fail3: wacom_destroy_leds(wacom);
- fail2: wacom_remove_shared_data(wacom_wac);
- fail1: kfree(wacom);
+fail_register_inputs:
+ wacom_clean_inputs(wacom);
+ wacom_destroy_battery(wacom);
+fail_battery:
+ wacom_remove_shared_data(wacom_wac);
+fail_shared_data:
+ wacom_clean_inputs(wacom);
+fail_allocate_inputs:
+fail_type:
+fail_pktlen:
+fail_parse:
+ kfree(wacom);
hid_set_drvdata(hdev, NULL);
return error;
}
hid_hw_stop(hdev);
cancel_work_sync(&wacom->work);
- wacom_unregister_inputs(wacom);
+ wacom_clean_inputs(wacom);
if (hdev->bus == BUS_BLUETOOTH)
device_remove_file(&hdev->dev, &dev_attr_speed);
wacom_destroy_battery(wacom);
- wacom_destroy_leds(wacom);
wacom_remove_shared_data(&wacom->wacom_wac);
hid_set_drvdata(hdev, NULL);
.id_table = wacom_ids,
.probe = wacom_probe,
.remove = wacom_remove,
+ .event = wacom_wac_event,
+ .report = wacom_wac_report,
#ifdef CONFIG_PM
.resume = wacom_resume,
.reset_resume = wacom_reset_resume,
return 0;
}
+static void wacom_map_usage(struct wacom *wacom, struct hid_usage *usage,
+ struct hid_field *field, __u8 type, __u16 code, int fuzz)
+{
+ struct wacom_wac *wacom_wac = &wacom->wacom_wac;
+ struct input_dev *input = wacom_wac->input;
+ int fmin = field->logical_minimum;
+ int fmax = field->logical_maximum;
+
+ usage->type = type;
+ usage->code = code;
+
+ set_bit(type, input->evbit);
+
+ switch (type) {
+ case EV_ABS:
+ input_set_abs_params(input, code, fmin, fmax, fuzz, 0);
+ input_abs_set_res(input, code,
+ hidinput_calc_abs_res(field, code));
+ break;
+ case EV_KEY:
+ input_set_capability(input, EV_KEY, code);
+ break;
+ case EV_MSC:
+ input_set_capability(input, EV_MSC, code);
+ break;
+ }
+}
+
+static void wacom_wac_pen_usage_mapping(struct hid_device *hdev,
+ struct hid_field *field, struct hid_usage *usage)
+{
+ struct wacom *wacom = hid_get_drvdata(hdev);
+
+ switch (usage->hid) {
+ case HID_GD_X:
+ wacom_map_usage(wacom, usage, field, EV_ABS, ABS_X, 4);
+ break;
+ case HID_GD_Y:
+ wacom_map_usage(wacom, usage, field, EV_ABS, ABS_Y, 4);
+ break;
+ case HID_DG_TIPPRESSURE:
+ wacom_map_usage(wacom, usage, field, EV_ABS, ABS_PRESSURE, 0);
+ break;
+ case HID_DG_INRANGE:
+ wacom_map_usage(wacom, usage, field, EV_KEY, BTN_TOOL_PEN, 0);
+ break;
+ case HID_DG_INVERT:
+ wacom_map_usage(wacom, usage, field, EV_KEY,
+ BTN_TOOL_RUBBER, 0);
+ break;
+ case HID_DG_ERASER:
+ case HID_DG_TIPSWITCH:
+ wacom_map_usage(wacom, usage, field, EV_KEY, BTN_TOUCH, 0);
+ break;
+ case HID_DG_BARRELSWITCH:
+ wacom_map_usage(wacom, usage, field, EV_KEY, BTN_STYLUS, 0);
+ break;
+ case HID_DG_BARRELSWITCH2:
+ wacom_map_usage(wacom, usage, field, EV_KEY, BTN_STYLUS2, 0);
+ break;
+ case HID_DG_TOOLSERIALNUMBER:
+ wacom_map_usage(wacom, usage, field, EV_MSC, MSC_SERIAL, 0);
+ break;
+ }
+}
+
+static int wacom_wac_pen_event(struct hid_device *hdev, struct hid_field *field,
+ struct hid_usage *usage, __s32 value)
+{
+ struct wacom *wacom = hid_get_drvdata(hdev);
+ struct wacom_wac *wacom_wac = &wacom->wacom_wac;
+ struct input_dev *input = wacom_wac->input;
+
+ /* checking which Tool / tip switch to send */
+ switch (usage->hid) {
+ case HID_DG_INRANGE:
+ wacom_wac->hid_data.inrange_state = value;
+ return 0;
+ case HID_DG_INVERT:
+ wacom_wac->hid_data.invert_state = value;
+ return 0;
+ case HID_DG_ERASER:
+ case HID_DG_TIPSWITCH:
+ wacom_wac->hid_data.tipswitch |= value;
+ return 0;
+ }
+
+ /* send pen events only when touch is up or forced out */
+ if (!usage->type || wacom_wac->shared->touch_down)
+ return 0;
+
+ input_event(input, usage->type, usage->code, value);
+
+ return 0;
+}
+
+static void wacom_wac_pen_report(struct hid_device *hdev,
+ struct hid_report *report)
+{
+ struct wacom *wacom = hid_get_drvdata(hdev);
+ struct wacom_wac *wacom_wac = &wacom->wacom_wac;
+ struct input_dev *input = wacom_wac->input;
+ bool prox = wacom_wac->hid_data.inrange_state;
+
+	if (!wacom_wac->shared->stylus_in_proximity)
+		/* first entry into proximity: select the tool */
+ wacom_wac->tool[0] = wacom_wac->hid_data.invert_state ?
+ BTN_TOOL_RUBBER : BTN_TOOL_PEN;
+
+ /* keep pen state for touch events */
+ wacom_wac->shared->stylus_in_proximity = prox;
+
+ /* send pen events only when touch is up or forced out */
+ if (!wacom_wac->shared->touch_down) {
+ input_report_key(input, BTN_TOUCH,
+ wacom_wac->hid_data.tipswitch);
+ input_report_key(input, wacom_wac->tool[0], prox);
+
+ wacom_wac->hid_data.tipswitch = false;
+
+ input_sync(input);
+ }
+}
+
+static void wacom_wac_finger_usage_mapping(struct hid_device *hdev,
+ struct hid_field *field, struct hid_usage *usage)
+{
+ struct wacom *wacom = hid_get_drvdata(hdev);
+ struct wacom_wac *wacom_wac = &wacom->wacom_wac;
+ struct input_dev *input = wacom_wac->input;
+ unsigned touch_max = wacom_wac->features.touch_max;
+
+ switch (usage->hid) {
+ case HID_GD_X:
+ if (touch_max == 1)
+ wacom_map_usage(wacom, usage, field, EV_ABS, ABS_X, 4);
+ else
+ wacom_map_usage(wacom, usage, field, EV_ABS,
+ ABS_MT_POSITION_X, 4);
+ break;
+ case HID_GD_Y:
+ if (touch_max == 1)
+ wacom_map_usage(wacom, usage, field, EV_ABS, ABS_Y, 4);
+ else
+ wacom_map_usage(wacom, usage, field, EV_ABS,
+ ABS_MT_POSITION_Y, 4);
+ break;
+ case HID_DG_CONTACTID:
+ input_mt_init_slots(input, wacom_wac->features.touch_max,
+ INPUT_MT_DIRECT);
+ break;
+ case HID_DG_INRANGE:
+ break;
+ case HID_DG_INVERT:
+ break;
+ case HID_DG_TIPSWITCH:
+ wacom_map_usage(wacom, usage, field, EV_KEY, BTN_TOUCH, 0);
+ break;
+ }
+}
+
+static int wacom_wac_finger_event(struct hid_device *hdev,
+ struct hid_field *field, struct hid_usage *usage, __s32 value)
+{
+ struct wacom *wacom = hid_get_drvdata(hdev);
+ struct wacom_wac *wacom_wac = &wacom->wacom_wac;
+
+ switch (usage->hid) {
+ case HID_GD_X:
+ wacom_wac->hid_data.x = value;
+ break;
+ case HID_GD_Y:
+ wacom_wac->hid_data.y = value;
+ break;
+ case HID_DG_CONTACTID:
+ wacom_wac->hid_data.id = value;
+ break;
+ case HID_DG_TIPSWITCH:
+ wacom_wac->hid_data.tipswitch = value;
+ break;
+ }
+
+ return 0;
+}
+
+static void wacom_wac_finger_mt_report(struct wacom_wac *wacom_wac,
+ struct input_dev *input, bool touch)
+{
+ int slot;
+ struct hid_data *hid_data = &wacom_wac->hid_data;
+
+ slot = input_mt_get_slot_by_key(input, hid_data->id);
+
+ input_mt_slot(input, slot);
+ input_mt_report_slot_state(input, MT_TOOL_FINGER, touch);
+ if (touch) {
+ input_report_abs(input, ABS_MT_POSITION_X, hid_data->x);
+ input_report_abs(input, ABS_MT_POSITION_Y, hid_data->y);
+ }
+ input_mt_sync_frame(input);
+}
+
+static void wacom_wac_finger_single_touch_report(struct wacom_wac *wacom_wac,
+ struct input_dev *input, bool touch)
+{
+ struct hid_data *hid_data = &wacom_wac->hid_data;
+
+ if (touch) {
+ input_report_abs(input, ABS_X, hid_data->x);
+ input_report_abs(input, ABS_Y, hid_data->y);
+ }
+ input_report_key(input, BTN_TOUCH, touch);
+}
+
+static void wacom_wac_finger_report(struct hid_device *hdev,
+ struct hid_report *report)
+{
+ struct wacom *wacom = hid_get_drvdata(hdev);
+ struct wacom_wac *wacom_wac = &wacom->wacom_wac;
+ struct input_dev *input = wacom_wac->input;
+ bool touch = wacom_wac->hid_data.tipswitch &&
+ !wacom_wac->shared->stylus_in_proximity;
+ unsigned touch_max = wacom_wac->features.touch_max;
+
+ if (touch_max > 1)
+ wacom_wac_finger_mt_report(wacom_wac, input, touch);
+ else
+ wacom_wac_finger_single_touch_report(wacom_wac, input, touch);
+ input_sync(input);
+
+ /* keep touch state for pen event */
+ wacom_wac->shared->touch_down = touch;
+}
+
+#define WACOM_PEN_FIELD(f) (((f)->logical == HID_DG_STYLUS) || \
+ ((f)->physical == HID_DG_STYLUS))
+#define WACOM_FINGER_FIELD(f) (((f)->logical == HID_DG_FINGER) || \
+ ((f)->physical == HID_DG_FINGER))
+
+void wacom_wac_usage_mapping(struct hid_device *hdev,
+ struct hid_field *field, struct hid_usage *usage)
+{
+ struct wacom *wacom = hid_get_drvdata(hdev);
+ struct wacom_wac *wacom_wac = &wacom->wacom_wac;
+ struct input_dev *input = wacom_wac->input;
+
+ /* currently, only direct devices have proper hid report descriptors */
+ __set_bit(INPUT_PROP_DIRECT, input->propbit);
+
+ if (WACOM_PEN_FIELD(field))
+ return wacom_wac_pen_usage_mapping(hdev, field, usage);
+
+ if (WACOM_FINGER_FIELD(field))
+ return wacom_wac_finger_usage_mapping(hdev, field, usage);
+}
+
+int wacom_wac_event(struct hid_device *hdev, struct hid_field *field,
+ struct hid_usage *usage, __s32 value)
+{
+ struct wacom *wacom = hid_get_drvdata(hdev);
+
+ if (wacom->wacom_wac.features.type != HID_GENERIC)
+ return 0;
+
+ if (WACOM_PEN_FIELD(field))
+ return wacom_wac_pen_event(hdev, field, usage, value);
+
+ if (WACOM_FINGER_FIELD(field))
+ return wacom_wac_finger_event(hdev, field, usage, value);
+
+ return 0;
+}
+
+void wacom_wac_report(struct hid_device *hdev, struct hid_report *report)
+{
+ struct wacom *wacom = hid_get_drvdata(hdev);
+ struct wacom_wac *wacom_wac = &wacom->wacom_wac;
+ struct hid_field *field = report->field[0];
+
+ if (wacom_wac->features.type != HID_GENERIC)
+ return;
+
+ if (WACOM_PEN_FIELD(field))
+ return wacom_wac_pen_report(hdev, report);
+
+ if (WACOM_FINGER_FIELD(field))
+ return wacom_wac_finger_report(hdev, report);
+}
+
static int wacom_bpt_touch(struct wacom_wac *wacom)
{
struct wacom_features *features = &wacom->features;
input_dev->evbit[0] |= BIT_MASK(EV_KEY) | BIT_MASK(EV_ABS);
+ if (features->type == HID_GENERIC)
+ /* setup has already been done */
+ return 0;
+
__set_bit(BTN_TOUCH, input_dev->keybit);
__set_bit(ABS_MISC, input_dev->absbit);
input_set_abs_params(input_dev, ABS_X, 0, 1, 0, 0);
input_set_abs_params(input_dev, ABS_Y, 0, 1, 0, 0);
+	/* kept so that udev and libwacom accept the pad */
+ __set_bit(BTN_STYLUS, input_dev->keybit);
+
switch (features->type) {
case GRAPHIRE_BT:
__set_bit(BTN_0, input_dev->keybit);
{ "Wacom ISDv5 309", .type = WACOM_24HDT, /* Touch */
.oVid = USB_VENDOR_ID_WACOM, .oPid = 0x0307, .touch_max = 10,
.check_for_hid_type = true, .hid_type = HID_TYPE_USBNONE };
+static const struct wacom_features wacom_features_0x30A =
+ { "Wacom ISDv5 30A", 59352, 33648, 2047, 63,
+ CINTIQ_HYBRID, WACOM_INTUOS3_RES, WACOM_INTUOS3_RES, 200, 200,
+ .oVid = USB_VENDOR_ID_WACOM, .oPid = 0x30C };
+static const struct wacom_features wacom_features_0x30C =
+ { "Wacom ISDv5 30C", .type = WACOM_24HDT, /* Touch */
+ .oVid = USB_VENDOR_ID_WACOM, .oPid = 0x30A, .touch_max = 10,
+ .check_for_hid_type = true, .hid_type = HID_TYPE_USBNONE };
+
+static const struct wacom_features wacom_features_HID_ANY_ID =
+ { "Wacom HID", .type = HID_GENERIC };
#define USB_DEVICE_WACOM(prod) \
HID_DEVICE(BUS_USB, HID_GROUP_WACOM, USB_VENDOR_ID_WACOM, prod),\
{ USB_DEVICE_WACOM(0x304) },
{ USB_DEVICE_WACOM(0x307) },
{ USB_DEVICE_WACOM(0x309) },
+ { USB_DEVICE_WACOM(0x30A) },
+ { USB_DEVICE_WACOM(0x30C) },
{ USB_DEVICE_WACOM(0x30E) },
{ USB_DEVICE_WACOM(0x314) },
{ USB_DEVICE_WACOM(0x315) },
{ USB_DEVICE_WACOM(0x4004) },
{ USB_DEVICE_WACOM(0x5000) },
{ USB_DEVICE_WACOM(0x5002) },
+
+ { USB_DEVICE_WACOM(HID_ANY_ID) },
{ }
};
MODULE_DEVICE_TABLE(hid, wacom_ids);
MTSCREEN,
MTTPC,
MTTPC_B,
+ HID_GENERIC,
MAX_TYPE
};
struct input_dev *touch_input;
};
+struct hid_data {
+ __s16 inputmode; /* InputMode HID feature, -1 if non-existent */
+ __s16 inputmode_index; /* InputMode HID feature index in the report */
+ bool inrange_state;
+ bool invert_state;
+ bool tipswitch;
+ int x;
+ int y;
+ int pressure;
+ int width;
+ int height;
+ int id;
+};
+
struct wacom_wac {
char name[WACOM_NAME_MAX];
char pad_name[WACOM_NAME_MAX];
struct wacom_shared *shared;
struct input_dev *input;
struct input_dev *pad_input;
+ bool input_registered;
int pid;
int battery_capacity;
int num_contacts_left;
int ps_connected;
u8 bt_features;
u8 bt_high_speed;
+ struct hid_data hid_data;
};
#endif
static void cleanup_domain(struct protection_domain *domain)
{
- struct iommu_dev_data *dev_data, *next;
+ struct iommu_dev_data *entry;
unsigned long flags;
write_lock_irqsave(&amd_iommu_devtable_lock, flags);
- list_for_each_entry_safe(dev_data, next, &domain->dev_list, list) {
- __detach_device(dev_data);
- atomic_set(&dev_data->bind, 0);
+ while (!list_empty(&domain->dev_list)) {
+ entry = list_first_entry(&domain->dev_list,
+ struct iommu_dev_data, list);
+ __detach_device(entry);
+ atomic_set(&entry->bind, 0);
}
write_unlock_irqrestore(&amd_iommu_devtable_lock, flags);
action != BUS_NOTIFY_DEL_DEVICE)
return 0;
+ /*
+ * If the device is still attached to a device driver we can't
+ * tear down the domain yet as DMA mappings may still be in use.
+ * Wait for the BUS_NOTIFY_UNBOUND_DRIVER event to do that.
+ */
+ if (action == BUS_NOTIFY_DEL_DEVICE && dev->driver != NULL)
+ return 0;
+
domain = find_domain(dev);
if (!domain)
return 0;
size_t orig_size = size;
int ret = 0;
- if (unlikely(domain->ops->unmap == NULL ||
+ if (unlikely(domain->ops->map == NULL ||
domain->ops->pgsize_bitmap == 0UL))
return -ENODEV;
/* $Id: xdi_msg.h,v 1.1.2.2 2001/02/16 08:40:36 armin Exp $ */
-#ifndef __DIVA_XDI_UM_CFG_MESSSGE_H__
+#ifndef __DIVA_XDI_UM_CFG_MESSAGE_H__
#define __DIVA_XDI_UM_CFG_MESSAGE_H__
/*
priv->raminit_ctrlreg = devm_ioremap(&pdev->dev, res->start,
resource_size(res));
- if (IS_ERR(priv->raminit_ctrlreg) || priv->instance < 0)
+ if (!priv->raminit_ctrlreg || priv->instance < 0)
dev_info(&pdev->dev, "control memory is not used for raminit\n");
else
priv->raminit = c_can_hw_raminit_ti;
/* process state changes depending on the new state */
switch (new_state) {
+ case CAN_STATE_ERROR_WARNING:
+ netdev_dbg(dev, "Error Warning\n");
+ cf->can_id |= CAN_ERR_CRTL;
+ cf->data[1] = (bec.txerr > bec.rxerr) ?
+ CAN_ERR_CRTL_TX_WARNING :
+ CAN_ERR_CRTL_RX_WARNING;
+ break;
case CAN_STATE_ERROR_ACTIVE:
netdev_dbg(dev, "Error Active\n");
cf->can_id |= CAN_ERR_PROT;
if (priv->devtype_data->features & FLEXCAN_HAS_BROKEN_ERR_STATE ||
priv->can.ctrlmode & CAN_CTRLMODE_BERR_REPORTING)
reg_ctrl |= FLEXCAN_CTRL_ERR_MSK;
+ else
+ reg_ctrl &= ~FLEXCAN_CTRL_ERR_MSK;
/* save for later use */
priv->reg_ctrl_default = reg_ctrl;
netdev_err(dev, "setting SJA1000 into normal mode failed!\n");
}
+/*
+ * initialize SJA1000 chip:
+ * - reset chip
+ * - set output mode
+ * - set baudrate
+ * - enable interrupts
+ * - start operating mode
+ */
+static void chipset_init(struct net_device *dev)
+{
+ struct sja1000_priv *priv = netdev_priv(dev);
+
+ /* set clock divider and output control register */
+ priv->write_reg(priv, SJA1000_CDR, priv->cdr | CDR_PELICAN);
+
+ /* set acceptance filter (accept all) */
+ priv->write_reg(priv, SJA1000_ACCC0, 0x00);
+ priv->write_reg(priv, SJA1000_ACCC1, 0x00);
+ priv->write_reg(priv, SJA1000_ACCC2, 0x00);
+ priv->write_reg(priv, SJA1000_ACCC3, 0x00);
+
+ priv->write_reg(priv, SJA1000_ACCM0, 0xFF);
+ priv->write_reg(priv, SJA1000_ACCM1, 0xFF);
+ priv->write_reg(priv, SJA1000_ACCM2, 0xFF);
+ priv->write_reg(priv, SJA1000_ACCM3, 0xFF);
+
+ priv->write_reg(priv, SJA1000_OCR, priv->ocr | OCR_MODE_NORMAL);
+}
+
static void sja1000_start(struct net_device *dev)
{
struct sja1000_priv *priv = netdev_priv(dev);
if (priv->can.state != CAN_STATE_STOPPED)
set_reset_mode(dev);
+ /* Initialize chip if uninitialized at this stage */
+ if (!(priv->read_reg(priv, SJA1000_CDR) & CDR_PELICAN))
+ chipset_init(dev);
+
/* Clear error counters and error code capture */
priv->write_reg(priv, SJA1000_TXERR, 0x0);
priv->write_reg(priv, SJA1000_RXERR, 0x0);
}
/*
- * initialize SJA1000 chip:
- * - reset chip
- * - set output mode
- * - set baudrate
- * - enable interrupts
- * - start operating mode
- */
-static void chipset_init(struct net_device *dev)
-{
- struct sja1000_priv *priv = netdev_priv(dev);
-
- /* set clock divider and output control register */
- priv->write_reg(priv, SJA1000_CDR, priv->cdr | CDR_PELICAN);
-
- /* set acceptance filter (accept all) */
- priv->write_reg(priv, SJA1000_ACCC0, 0x00);
- priv->write_reg(priv, SJA1000_ACCC1, 0x00);
- priv->write_reg(priv, SJA1000_ACCC2, 0x00);
- priv->write_reg(priv, SJA1000_ACCC3, 0x00);
-
- priv->write_reg(priv, SJA1000_ACCM0, 0xFF);
- priv->write_reg(priv, SJA1000_ACCM1, 0xFF);
- priv->write_reg(priv, SJA1000_ACCM2, 0xFF);
- priv->write_reg(priv, SJA1000_ACCM3, 0xFF);
-
- priv->write_reg(priv, SJA1000_OCR, priv->ocr | OCR_MODE_NORMAL);
-}
-
-/*
* transmit a CAN message
* message layout in the sk_buff should be like this:
* xx xx xx xx ff ll 00 11 22 33 44 55 66 77
struct xgene_enet_desc_ring *ring;
ring = pdata->tx_ring;
- if (ring && ring->cp_ring && ring->cp_ring->cp_skb)
- devm_kfree(dev, ring->cp_ring->cp_skb);
- xgene_enet_free_desc_ring(ring);
+ if (ring) {
+ if (ring->cp_ring && ring->cp_ring->cp_skb)
+ devm_kfree(dev, ring->cp_ring->cp_skb);
+ xgene_enet_free_desc_ring(ring);
+ }
ring = pdata->rx_ring;
- if (ring && ring->buf_pool && ring->buf_pool->rx_skb)
- devm_kfree(dev, ring->buf_pool->rx_skb);
- xgene_enet_free_desc_ring(ring->buf_pool);
- xgene_enet_free_desc_ring(ring);
+ if (ring) {
+ if (ring->buf_pool) {
+ if (ring->buf_pool->rx_skb)
+ devm_kfree(dev, ring->buf_pool->rx_skb);
+ xgene_enet_free_desc_ring(ring->buf_pool);
+ }
+ xgene_enet_free_desc_ring(ring);
+ }
}
static struct xgene_enet_desc_ring *xgene_enet_create_desc_ring(
#ifdef BNX2X_STOP_ON_ERROR
fp->tpa_queue_used |= (1 << queue);
-#ifdef _ASM_GENERIC_INT_L64_H
- DP(NETIF_MSG_RX_STATUS, "fp->tpa_queue_used = 0x%lx\n",
-#else
DP(NETIF_MSG_RX_STATUS, "fp->tpa_queue_used = 0x%llx\n",
-#endif
fp->tpa_queue_used);
#endif
}
}
#define BNX2X_PREV_UNDI_PROD_ADDR(p) (BAR_TSTRORM_INTMEM + 0x1508 + ((p) << 4))
+#define BNX2X_PREV_UNDI_PROD_ADDR_H(f) (BAR_TSTRORM_INTMEM + \
+ 0x1848 + ((f) << 4))
#define BNX2X_PREV_UNDI_RCQ(val) ((val) & 0xffff)
#define BNX2X_PREV_UNDI_BD(val) ((val) >> 16 & 0xffff)
#define BNX2X_PREV_UNDI_PROD(rcq, bd) ((bd) << 16 | (rcq))
#define BCM_5710_UNDI_FW_MF_MAJOR (0x07)
#define BCM_5710_UNDI_FW_MF_MINOR (0x08)
#define BCM_5710_UNDI_FW_MF_VERS (0x05)
-#define BNX2X_PREV_UNDI_MF_PORT(p) (BAR_TSTRORM_INTMEM + 0x150c + ((p) << 4))
-#define BNX2X_PREV_UNDI_MF_FUNC(f) (BAR_TSTRORM_INTMEM + 0x184c + ((f) << 4))
static bool bnx2x_prev_is_after_undi(struct bnx2x *bp)
{
return false;
}
-static bool bnx2x_prev_unload_undi_fw_supports_mf(struct bnx2x *bp)
-{
- u8 major, minor, version;
- u32 fw;
-
- /* Must check that FW is loaded */
- if (!(REG_RD(bp, MISC_REG_RESET_REG_1) &
- MISC_REGISTERS_RESET_REG_1_RST_XSEM)) {
- BNX2X_DEV_INFO("XSEM is reset - UNDI MF FW is not loaded\n");
- return false;
- }
-
- /* Read Currently loaded FW version */
- fw = REG_RD(bp, XSEM_REG_PRAM);
- major = fw & 0xff;
- minor = (fw >> 0x8) & 0xff;
- version = (fw >> 0x10) & 0xff;
- BNX2X_DEV_INFO("Loaded FW: 0x%08x: Major 0x%02x Minor 0x%02x Version 0x%02x\n",
- fw, major, minor, version);
-
- if (major > BCM_5710_UNDI_FW_MF_MAJOR)
- return true;
-
- if ((major == BCM_5710_UNDI_FW_MF_MAJOR) &&
- (minor > BCM_5710_UNDI_FW_MF_MINOR))
- return true;
-
- if ((major == BCM_5710_UNDI_FW_MF_MAJOR) &&
- (minor == BCM_5710_UNDI_FW_MF_MINOR) &&
- (version >= BCM_5710_UNDI_FW_MF_VERS))
- return true;
-
- return false;
-}
-
-static void bnx2x_prev_unload_undi_mf(struct bnx2x *bp)
-{
- int i;
-
- /* Due to legacy (FW) code, the first function on each engine has a
- * different offset macro from the rest of the functions.
- * Setting this for all 8 functions is harmless regardless of whether
- * this is actually a multi-function device.
- */
- for (i = 0; i < 2; i++)
- REG_WR(bp, BNX2X_PREV_UNDI_MF_PORT(i), 1);
-
- for (i = 2; i < 8; i++)
- REG_WR(bp, BNX2X_PREV_UNDI_MF_FUNC(i - 2), 1);
-
- BNX2X_DEV_INFO("UNDI FW (MF) set to discard\n");
-}
-
-static void bnx2x_prev_unload_undi_inc(struct bnx2x *bp, u8 port, u8 inc)
+static void bnx2x_prev_unload_undi_inc(struct bnx2x *bp, u8 inc)
{
u16 rcq, bd;
- u32 tmp_reg = REG_RD(bp, BNX2X_PREV_UNDI_PROD_ADDR(port));
+ u32 addr, tmp_reg;
+ if (BP_FUNC(bp) < 2)
+ addr = BNX2X_PREV_UNDI_PROD_ADDR(BP_PORT(bp));
+ else
+ addr = BNX2X_PREV_UNDI_PROD_ADDR_H(BP_FUNC(bp) - 2);
+
+ tmp_reg = REG_RD(bp, addr);
rcq = BNX2X_PREV_UNDI_RCQ(tmp_reg) + inc;
bd = BNX2X_PREV_UNDI_BD(tmp_reg) + inc;
tmp_reg = BNX2X_PREV_UNDI_PROD(rcq, bd);
- REG_WR(bp, BNX2X_PREV_UNDI_PROD_ADDR(port), tmp_reg);
+ REG_WR(bp, addr, tmp_reg);
- BNX2X_DEV_INFO("UNDI producer [%d] rings bd -> 0x%04x, rcq -> 0x%04x\n",
- port, bd, rcq);
+ BNX2X_DEV_INFO("UNDI producer [%d/%d][%08x] rings bd -> 0x%04x, rcq -> 0x%04x\n",
+ BP_PORT(bp), BP_FUNC(bp), addr, bd, rcq);
}
static int bnx2x_prev_mcp_done(struct bnx2x *bp)
/* Reset should be performed after BRB is emptied */
if (reset_reg & MISC_REGISTERS_RESET_REG_1_RST_BRB1) {
u32 timer_count = 1000;
- bool need_write = true;
/* Close the MAC Rx to prevent BRB from filling up */
bnx2x_prev_unload_close_mac(bp, &mac_vals);
else
timer_count--;
- /* New UNDI FW supports MF and contains better
- * cleaning methods - might be redundant but harmless.
- */
- if (bnx2x_prev_unload_undi_fw_supports_mf(bp)) {
- if (need_write) {
- bnx2x_prev_unload_undi_mf(bp);
- need_write = false;
- }
- } else if (prev_undi) {
- /* If UNDI resides in memory,
- * manually increment it
- */
- bnx2x_prev_unload_undi_inc(bp, BP_PORT(bp), 1);
- }
+ /* If UNDI resides in memory, manually increment it */
+ if (prev_undi)
+ bnx2x_prev_unload_undi_inc(bp, 1);
+
udelay(10);
}
struct tid_info tids;
void **tid_release_head;
spinlock_t tid_release_lock;
+ struct workqueue_struct *workq;
struct work_struct tid_release_task;
struct work_struct db_full_task;
struct work_struct db_drop_task;
return ret;
}
-static struct workqueue_struct *workq;
-
/**
* link_start - enable a port
* @dev: the port to enable
adap->tid_release_head = (void **)((uintptr_t)p | chan);
if (!adap->tid_release_task_busy) {
adap->tid_release_task_busy = true;
- queue_work(workq, &adap->tid_release_task);
+ queue_work(adap->workq, &adap->tid_release_task);
}
spin_unlock_bh(&adap->tid_release_lock);
}
notify_rdma_uld(adap, CXGB4_CONTROL_DB_FULL);
t4_set_reg_field(adap, SGE_INT_ENABLE3,
DBFIFO_HP_INT | DBFIFO_LP_INT, 0);
- queue_work(workq, &adap->db_full_task);
+ queue_work(adap->workq, &adap->db_full_task);
}
}
disable_dbs(adap);
notify_rdma_uld(adap, CXGB4_CONTROL_DB_FULL);
}
- queue_work(workq, &adap->db_drop_task);
+ queue_work(adap->workq, &adap->db_drop_task);
}
static void uld_attach(struct adapter *adap, unsigned int uld)
goto out_disable_device;
}
+ adapter->workq = create_singlethread_workqueue("cxgb4");
+ if (!adapter->workq) {
+ err = -ENOMEM;
+ goto out_free_adapter;
+ }
+
/* PCI device has been enabled */
adapter->flags |= DEV_ENABLED;
out_unmap_bar0:
iounmap(adapter->regs);
out_free_adapter:
+ if (adapter->workq)
+ destroy_workqueue(adapter->workq);
+
kfree(adapter);
out_disable_device:
pci_disable_pcie_error_reporting(pdev);
if (adapter) {
int i;
+ /* Tear down per-adapter Work Queue first since it can contain
+ * references to our adapter data structure.
+ */
+ destroy_workqueue(adapter->workq);
+
if (is_offload(adapter))
detach_ulds(adapter);
{
int ret;
- workq = create_singlethread_workqueue("cxgb4");
- if (!workq)
- return -ENOMEM;
-
/* Debugfs support is optional, just warn if this fails */
cxgb4_debugfs_root = debugfs_create_dir(KBUILD_MODNAME, NULL);
if (!cxgb4_debugfs_root)
pr_warn("could not create debugfs entry, continuing\n");
ret = pci_register_driver(&cxgb4_driver);
- if (ret < 0) {
+ if (ret < 0)
debugfs_remove(cxgb4_debugfs_root);
- destroy_workqueue(workq);
- }
register_inet6addr_notifier(&cxgb4_inet6addr_notifier);
unregister_inet6addr_notifier(&cxgb4_inet6addr_notifier);
pci_unregister_driver(&cxgb4_driver);
debugfs_remove(cxgb4_debugfs_root); /* NULL ok */
- flush_workqueue(workq);
- destroy_workqueue(workq);
}
module_init(cxgb4_init_module);
FW_EQ_ETH_CMD_PFN(adap->fn) | FW_EQ_ETH_CMD_VFN(0));
c.alloc_to_len16 = htonl(FW_EQ_ETH_CMD_ALLOC |
FW_EQ_ETH_CMD_EQSTART | FW_LEN16(c));
- c.viid_pkd = htonl(FW_EQ_ETH_CMD_VIID(pi->viid));
+ c.viid_pkd = htonl(FW_EQ_ETH_CMD_AUTOEQUEQE |
+ FW_EQ_ETH_CMD_VIID(pi->viid));
c.fetchszm_to_iqid = htonl(FW_EQ_ETH_CMD_HOSTFCMODE(2) |
FW_EQ_ETH_CMD_PCIECHN(pi->tx_chan) |
FW_EQ_ETH_CMD_FETCHRO(1) |
#define FW_EQ_ETH_CMD_CIDXFTHRESH(x) ((x) << 16)
#define FW_EQ_ETH_CMD_EQSIZE(x) ((x) << 0)
+#define FW_EQ_ETH_CMD_AUTOEQUEQE (1U << 30)
#define FW_EQ_ETH_CMD_VIID(x) ((x) << 16)
struct fw_eq_ctrl_cmd {
cmd.alloc_to_len16 = cpu_to_be32(FW_EQ_ETH_CMD_ALLOC |
FW_EQ_ETH_CMD_EQSTART |
FW_LEN16(cmd));
- cmd.viid_pkd = cpu_to_be32(FW_EQ_ETH_CMD_VIID(pi->viid));
+ cmd.viid_pkd = cpu_to_be32(FW_EQ_ETH_CMD_AUTOEQUEQE |
+ FW_EQ_ETH_CMD_VIID(pi->viid));
cmd.fetchszm_to_iqid =
cpu_to_be32(FW_EQ_ETH_CMD_HOSTFCMODE(SGE_HOSTFCMODE_STPG) |
FW_EQ_ETH_CMD_PCIECHN(pi->port_id) |
struct clk *clk_enet_out;
struct clk *clk_ptp;
+ bool ptp_clk_on;
+ struct mutex ptp_clk_mutex;
+
/* The saved address of a sent-in-place packet/buffer, for skfree(). */
unsigned char *tx_bounce[TX_RING_SIZE];
struct sk_buff *tx_skbuff[TX_RING_SIZE];
u32 cycle_speed;
int hwts_rx_en;
int hwts_tx_en;
- struct timer_list time_keep;
+ struct delayed_work time_keep;
struct regulator *reg_phy;
};
goto failed_clk_enet_out;
}
if (fep->clk_ptp) {
+ mutex_lock(&fep->ptp_clk_mutex);
ret = clk_prepare_enable(fep->clk_ptp);
- if (ret)
+ if (ret) {
+ mutex_unlock(&fep->ptp_clk_mutex);
goto failed_clk_ptp;
+ } else {
+ fep->ptp_clk_on = true;
+ }
+ mutex_unlock(&fep->ptp_clk_mutex);
}
} else {
clk_disable_unprepare(fep->clk_ahb);
clk_disable_unprepare(fep->clk_ipg);
if (fep->clk_enet_out)
clk_disable_unprepare(fep->clk_enet_out);
- if (fep->clk_ptp)
+ if (fep->clk_ptp) {
+ mutex_lock(&fep->ptp_clk_mutex);
clk_disable_unprepare(fep->clk_ptp);
+ fep->ptp_clk_on = false;
+ mutex_unlock(&fep->ptp_clk_mutex);
+ }
}
return 0;
if (IS_ERR(fep->clk_enet_out))
fep->clk_enet_out = NULL;
+ fep->ptp_clk_on = false;
+ mutex_init(&fep->ptp_clk_mutex);
fep->clk_ptp = devm_clk_get(&pdev->dev, "ptp");
fep->bufdesc_ex =
pdev->id_entry->driver_data & FEC_QUIRK_HAS_BUFDESC_EX;
struct net_device *ndev = platform_get_drvdata(pdev);
struct fec_enet_private *fep = netdev_priv(ndev);
+ cancel_delayed_work_sync(&fep->time_keep);
cancel_work_sync(&fep->tx_timeout_work);
unregister_netdev(ndev);
fec_enet_mii_remove(fep);
- del_timer_sync(&fep->time_keep);
if (fep->reg_phy)
regulator_disable(fep->reg_phy);
if (fep->ptp_clock)
u64 ns;
unsigned long flags;
+ mutex_lock(&fep->ptp_clk_mutex);
+ /* Check the ptp clock */
+ if (!fep->ptp_clk_on) {
+ mutex_unlock(&fep->ptp_clk_mutex);
+ return -EINVAL;
+ }
+
ns = ts->tv_sec * 1000000000ULL;
ns += ts->tv_nsec;
spin_lock_irqsave(&fep->tmreg_lock, flags);
timecounter_init(&fep->tc, &fep->cc, ns);
spin_unlock_irqrestore(&fep->tmreg_lock, flags);
+ mutex_unlock(&fep->ptp_clk_mutex);
return 0;
}
 * fec_time_keep - call timecounter_read every second to avoid timer overrun
 * because ENET only supports a 32-bit counter, which overflows in about 4s
*/
-static void fec_time_keep(unsigned long _data)
+static void fec_time_keep(struct work_struct *work)
{
- struct fec_enet_private *fep = (struct fec_enet_private *)_data;
+ struct delayed_work *dwork = to_delayed_work(work);
+ struct fec_enet_private *fep = container_of(dwork, struct fec_enet_private, time_keep);
u64 ns;
unsigned long flags;
- spin_lock_irqsave(&fep->tmreg_lock, flags);
- ns = timecounter_read(&fep->tc);
- spin_unlock_irqrestore(&fep->tmreg_lock, flags);
+ mutex_lock(&fep->ptp_clk_mutex);
+ if (fep->ptp_clk_on) {
+ spin_lock_irqsave(&fep->tmreg_lock, flags);
+ ns = timecounter_read(&fep->tc);
+ spin_unlock_irqrestore(&fep->tmreg_lock, flags);
+ }
+ mutex_unlock(&fep->ptp_clk_mutex);
- mod_timer(&fep->time_keep, jiffies + HZ);
+ schedule_delayed_work(&fep->time_keep, HZ);
}
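The conversion above replaces a cast from the timer's unsigned long data field
with container_of() on the embedded delayed_work. A minimal user-space sketch
of that pattern, with container_of() spelled out and hypothetical struct names
(fec_priv_like stands in for fec_enet_private):

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct delayed_work { int pending; };

struct fec_priv_like {
	int hwts_rx_en;
	struct delayed_work time_keep;	/* embedded, not a pointer */
};

static void time_keep_cb(struct delayed_work *dwork)
{
	/* Recover the containing private struct from the work pointer. */
	struct fec_priv_like *fep =
		container_of(dwork, struct fec_priv_like, time_keep);

	printf("hwts_rx_en = %d\n", fep->hwts_rx_en);
}

int main(void)
{
	struct fec_priv_like fep = { .hwts_rx_en = 1 };

	time_keep_cb(&fep.time_keep);
	return 0;
}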
/**
fec_ptp_start_cyclecounter(ndev);
- init_timer(&fep->time_keep);
- fep->time_keep.data = (unsigned long)fep;
- fep->time_keep.function = fec_time_keep;
- fep->time_keep.expires = jiffies + HZ;
- add_timer(&fep->time_keep);
+ INIT_DELAYED_WORK(&fep->time_keep, fec_time_keep);
fep->ptp_clock = ptp_clock_register(&fep->ptp_caps, &pdev->dev);
if (IS_ERR(fep->ptp_clock)) {
fep->ptp_clock = NULL;
pr_err("ptp_clock_register failed\n");
}
+
+ schedule_delayed_work(&fep->time_keep, HZ);
}
atomic_add(buffers_added, &(pool->available));
}
+/*
+ * The final 8 bytes of the buffer list are a counter of frames dropped
+ * because there was no buffer in the buffer list capable of holding
+ * the frame.
+ */
+static void ibmveth_update_rx_no_buffer(struct ibmveth_adapter *adapter)
+{
+ __be64 *p = adapter->buffer_list_addr + 4096 - 8;
+
+ adapter->rx_no_buffer = be64_to_cpup(p);
+}
+
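The helper above replaces an open-coded, endian-unsafe load with
be64_to_cpup() on the last 8 bytes of the 4096-byte buffer list. A
self-contained sketch of the same decode, portable across host byte orders
(be64_decode is an illustrative stand-in for be64_to_cpup):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint64_t be64_decode(const unsigned char *p)
{
	uint64_t v = 0;
	int i;

	/* Most significant byte first, regardless of host endianness. */
	for (i = 0; i < 8; i++)
		v = (v << 8) | p[i];
	return v;
}

int main(void)
{
	unsigned char page[4096];
	unsigned char ctr[8] = { 0, 0, 0, 0, 0, 0, 0x01, 0x02 }; /* 258 */

	memset(page, 0, sizeof(page));
	memcpy(page + 4096 - 8, ctr, 8);

	printf("rx_no_buffer = %llu\n",
	       (unsigned long long)be64_decode(page + 4096 - 8));
	return 0;
}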
/* replenish routine */
static void ibmveth_replenish_task(struct ibmveth_adapter *adapter)
{
ibmveth_replenish_buffer_pool(adapter, pool);
}
- adapter->rx_no_buffer = *(u64 *)(((char*)adapter->buffer_list_addr) +
- 4096 - 8);
+ ibmveth_update_rx_no_buffer(adapter);
}
/* empty and free a buffer pool - also used to do cleanup in error paths */
free_irq(netdev->irq, netdev);
- adapter->rx_no_buffer = *(u64 *)(((char *)adapter->buffer_list_addr) +
- 4096 - 8);
+ ibmveth_update_rx_no_buffer(adapter);
ibmveth_cleanup(adapter);
u32 prttsyn_stat;
int n;
- if (pf->flags & I40E_FLAG_PTP)
+ if (!(pf->flags & I40E_FLAG_PTP))
return;
prttsyn_stat = rd32(hw, I40E_PRTTSYN_STAT_1);
static int i40e_vc_send_msg_to_vf(struct i40e_vf *vf, u32 v_opcode,
u32 v_retval, u8 *msg, u16 msglen)
{
- struct i40e_pf *pf = vf->pf;
- struct i40e_hw *hw = &pf->hw;
- int abs_vf_id = vf->vf_id + hw->func_caps.vf_base_id;
+ struct i40e_pf *pf;
+ struct i40e_hw *hw;
+ int abs_vf_id;
i40e_status aq_ret;
+ /* validate the request */
+ if (!vf || vf->vf_id >= vf->pf->num_alloc_vfs)
+ return -EINVAL;
+
+ pf = vf->pf;
+ hw = &pf->hw;
+ abs_vf_id = vf->vf_id + hw->func_caps.vf_base_id;
+
/* single place to detect unsuccessful return values */
if (v_retval) {
vf->num_invalid_msgs++;
{
struct i40e_hw *hw = &pf->hw;
struct i40e_vf *vf = pf->vf;
- int abs_vf_id = vf->vf_id + hw->func_caps.vf_base_id;
int i;
- for (i = 0; i < pf->num_alloc_vfs; i++) {
+ for (i = 0; i < pf->num_alloc_vfs; i++, vf++) {
+ int abs_vf_id = vf->vf_id + hw->func_caps.vf_base_id;
+		/* Not all VFs are enabled, so skip the ones that are not */
+ if (!test_bit(I40E_VF_STAT_INIT, &vf->vf_states) &&
+ !test_bit(I40E_VF_STAT_ACTIVE, &vf->vf_states))
+ continue;
+
/* Ignore return value on purpose - a given VF may fail, but
* we need to keep going and send to all of them
*/
i40e_aq_send_msg_to_vf(hw, abs_vf_id, v_opcode, v_retval,
msg, msglen, NULL);
- vf++;
- abs_vf_id = vf->vf_id + hw->func_caps.vf_base_id;
}
}
struct i40e_hw *hw = &pf->hw;
struct i40e_vf *vf = pf->vf;
struct i40e_link_status *ls = &pf->hw.phy.link_info;
- int abs_vf_id = vf->vf_id + hw->func_caps.vf_base_id;
int i;
pfe.event = I40E_VIRTCHNL_EVENT_LINK_CHANGE;
pfe.severity = I40E_PF_EVENT_SEVERITY_INFO;
- for (i = 0; i < pf->num_alloc_vfs; i++) {
+ for (i = 0; i < pf->num_alloc_vfs; i++, vf++) {
+ int abs_vf_id = vf->vf_id + hw->func_caps.vf_base_id;
if (vf->link_forced) {
pfe.event_data.link_event.link_status = vf->link_up;
pfe.event_data.link_event.link_speed =
i40e_aq_send_msg_to_vf(hw, abs_vf_id, I40E_VIRTCHNL_OP_EVENT,
0, (u8 *)&pfe, sizeof(pfe),
NULL);
- vf++;
- abs_vf_id = vf->vf_id + hw->func_caps.vf_base_id;
}
}
void i40e_vc_notify_vf_reset(struct i40e_vf *vf)
{
struct i40e_virtchnl_pf_event pfe;
- int abs_vf_id = vf->vf_id + vf->pf->hw.func_caps.vf_base_id;
+ int abs_vf_id;
+
+ /* validate the request */
+ if (!vf || vf->vf_id >= vf->pf->num_alloc_vfs)
+ return;
+
+	/* verify that the VF is in either init or active before proceeding */
+ if (!test_bit(I40E_VF_STAT_INIT, &vf->vf_states) &&
+ !test_bit(I40E_VF_STAT_ACTIVE, &vf->vf_states))
+ return;
+
+ abs_vf_id = vf->vf_id + vf->pf->hw.func_caps.vf_base_id;
pfe.event = I40E_VIRTCHNL_EVENT_RESET_IMPENDING;
pfe.severity = I40E_PF_EVENT_SEVERITY_CERTAIN_DOOM;
u16 cksum;
u16 unused;
u8 model[16];
- u16 mfg_id;
+ u8 mfg_id;
u16 id;
u8 flag;
u8 erase_cmd;
return QLC_DEFAULT_VNIC_COUNT;
}
+static inline void qlcnic_swap32_buffer(u32 *buffer, int count)
+{
+#if defined(__BIG_ENDIAN)
+ u32 *tmp = buffer;
+ int i;
+
+ for (i = 0; i < count; i++) {
+ *tmp = swab32(*tmp);
+ tmp++;
+ }
+#endif
+}
+
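qlcnic_swap32_buffer() compiles to a no-op on little-endian hosts because of
the __BIG_ENDIAN guard; on big-endian hosts it byte-reverses every 32-bit word
so that data laid out in little-endian word order reads correctly. A
user-space sketch that swaps unconditionally, so the effect can be observed on
any host:

#include <stdint.h>
#include <stdio.h>

static uint32_t swab32(uint32_t x)
{
	return ((x & 0x000000ffU) << 24) |
	       ((x & 0x0000ff00U) << 8)  |
	       ((x & 0x00ff0000U) >> 8)  |
	       ((x & 0xff000000U) >> 24);
}

static void swap32_buffer(uint32_t *buf, int count)
{
	int i;

	/* Byte-reverse each 32-bit word in place. */
	for (i = 0; i < count; i++)
		buf[i] = swab32(buf[i]);
}

int main(void)
{
	uint32_t words[2] = { 0x11223344, 0xdeadbeef };

	swap32_buffer(words, 2);
	/* Prints: 44332211 efbeadde */
	printf("%08x %08x\n", (unsigned)words[0], (unsigned)words[1]);
	return 0;
}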
#ifdef CONFIG_QLCNIC_HWMON
void qlcnic_register_hwmon_dev(struct qlcnic_adapter *);
void qlcnic_unregister_hwmon_dev(struct qlcnic_adapter *);
}
qlcnic_83xx_wrt_reg_indirect(adapter, QLC_83XX_FLASH_DIRECT_WINDOW,
- (addr));
+ (addr & 0xFFFF0000));
range = flash_offset + (count * sizeof(u32));
/* Check if data is spread across multiple sectors */
ret = qlcnic_83xx_lockless_flash_read32(adapter, QLCNIC_FDT_LOCATION,
(u8 *)&adapter->ahw->fdt,
count);
-
+ qlcnic_swap32_buffer((u32 *)&adapter->ahw->fdt, count);
qlcnic_83xx_unlock_flash(adapter);
return ret;
}
addr1 = (sector_start_addr & 0xFF) << 16;
addr2 = (sector_start_addr & 0xFF0000) >> 16;
- reversed_addr = addr1 | addr2;
+ reversed_addr = addr1 | addr2 | (sector_start_addr & 0xFF00);
qlcnic_83xx_wrt_reg_indirect(adapter, QLC_83XX_FLASH_WRDATA,
reversed_addr);
{
struct qlc_83xx_fw_info *fw_info = adapter->ahw->fw_info;
const struct firmware *fw = fw_info->fw;
- u32 dest, *p_cache;
+ u32 dest, *p_cache, *temp;
int i, ret = -EIO;
+ __le32 *temp_le;
u8 data[16];
size_t size;
u64 addr;
+ temp = kzalloc(fw->size, GFP_KERNEL);
+ if (!temp) {
+ release_firmware(fw);
+ fw_info->fw = NULL;
+ return -ENOMEM;
+ }
+
+ temp_le = (__le32 *)fw->data;
+
+	/* The FW image in the file is little endian; swap the data to nullify
+	 * the effect of the writel() operation on big endian platforms.
+ */
+ for (i = 0; i < fw->size / sizeof(u32); i++)
+ temp[i] = __le32_to_cpu(temp_le[i]);
+
dest = QLCRDX(adapter->ahw, QLCNIC_FW_IMAGE_ADDR);
size = (fw->size & ~0xF);
- p_cache = (u32 *)fw->data;
+ p_cache = temp;
addr = (u64)dest;
ret = qlcnic_ms_mem_write128(adapter, addr,
p_cache, size / 16);
if (ret) {
dev_err(&adapter->pdev->dev, "MS memory write failed\n");
- release_firmware(fw);
- fw_info->fw = NULL;
- return -EIO;
+ goto exit;
}
/* alignment check */
if (fw->size & 0xF) {
addr = dest + size;
for (i = 0; i < (fw->size & 0xF); i++)
- data[i] = fw->data[size + i];
+ data[i] = temp[size + i];
for (; i < 16; i++)
data[i] = 0;
ret = qlcnic_ms_mem_write128(adapter, addr,
if (ret) {
dev_err(&adapter->pdev->dev,
"MS memory write failed\n");
- release_firmware(fw);
- fw_info->fw = NULL;
- return -EIO;
+ goto exit;
}
}
+
+exit:
release_firmware(fw);
fw_info->fw = NULL;
+ kfree(temp);
- return 0;
+ return ret;
}
static void qlcnic_83xx_dump_pause_control_regs(struct qlcnic_adapter *adapter)
u32 type;
u32 offset;
u32 cap_size;
+#if defined(__LITTLE_ENDIAN)
u8 mask;
u8 rsvd[2];
u8 flags;
+#else
+ u8 flags;
+ u8 rsvd[2];
+ u8 mask;
+#endif
} __packed;
struct __crb {
u32 addr;
+#if defined(__LITTLE_ENDIAN)
u8 stride;
u8 rsvd1[3];
+#else
+ u8 rsvd1[3];
+ u8 stride;
+#endif
u32 data_size;
u32 no_ops;
u32 rsvd2[4];
struct __ctrl {
u32 addr;
+#if defined(__LITTLE_ENDIAN)
u8 stride;
u8 index_a;
u16 timeout;
+#else
+ u16 timeout;
+ u8 index_a;
+ u8 stride;
+#endif
u32 data_size;
u32 no_ops;
+#if defined(__LITTLE_ENDIAN)
u8 opcode;
u8 index_v;
u8 shl_val;
u8 shr_val;
+#else
+ u8 shr_val;
+ u8 shl_val;
+ u8 index_v;
+ u8 opcode;
+#endif
u32 val1;
u32 val2;
u32 val3;
struct __cache {
u32 addr;
+#if defined(__LITTLE_ENDIAN)
u16 stride;
u16 init_tag_val;
+#else
+ u16 init_tag_val;
+ u16 stride;
+#endif
u32 size;
u32 no_ops;
u32 ctrl_addr;
u32 ctrl_val;
u32 read_addr;
+#if defined(__LITTLE_ENDIAN)
u8 read_addr_stride;
u8 read_addr_num;
u8 rsvd1[2];
+#else
+ u8 rsvd1[2];
+ u8 read_addr_num;
+ u8 read_addr_stride;
+#endif
} __packed;
struct __ocm {
struct __queue {
u32 sel_addr;
+#if defined(__LITTLE_ENDIAN)
u16 stride;
u8 rsvd[2];
+#else
+ u8 rsvd[2];
+ u16 stride;
+#endif
u32 size;
u32 no_ops;
u8 rsvd2[8];
u32 read_addr;
+#if defined(__LITTLE_ENDIAN)
u8 read_addr_stride;
u8 read_addr_cnt;
u8 rsvd3[2];
+#else
+ u8 rsvd3[2];
+ u8 read_addr_cnt;
+ u8 read_addr_stride;
+#endif
} __packed;
struct __pollrd {
u32 sel_addr;
u32 read_addr;
u32 sel_val;
+#if defined(__LITTLE_ENDIAN)
u16 sel_val_stride;
u16 no_ops;
+#else
+ u16 no_ops;
+ u16 sel_val_stride;
+#endif
u32 poll_wait;
u32 poll_mask;
u32 data_size;
u32 no_ops;
u32 sel_val_mask;
u32 read_addr;
+#if defined(__LITTLE_ENDIAN)
u8 sel_val_stride;
u8 data_size;
u8 rsvd[2];
+#else
+ u8 rsvd[2];
+ u8 data_size;
+ u8 sel_val_stride;
+#endif
} __packed;
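The mirrored field order under the new #else branches pairs with
qlcnic_swap32_buffer(): once each 32-bit word of a dump template has been
byte-swapped into host order on a big-endian machine, the sub-word fields sit
at the opposite ends of the word, so the struct layout must flip to keep the
names pointing at the right bytes. A small sketch of that invariant using a
union (field names borrowed from struct __crb; the byte-order test assumes
GCC/Clang-style __BYTE_ORDER__ macros):

#include <stdint.h>
#include <stdio.h>

union crb_word {
	uint32_t word;
	struct {
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
		uint8_t rsvd1[3];
		uint8_t stride;
#else
		uint8_t stride;
		uint8_t rsvd1[3];
#endif
	} f;
};

int main(void)
{
	/* As a native word value, stride occupies the low byte. */
	union crb_word w = { .word = 0x00000004 };

	/* With the mirrored layout, 'stride' names the correct byte on
	 * either host byte order. Prints: stride = 4
	 */
	printf("stride = %u\n", w.f.stride);
	return 0;
}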
struct __pollrdmwr {
if (ret != 0)
return ret;
qlcnic_read_crb(adapter, buf, offset, size);
+ qlcnic_swap32_buffer((u32 *)buf, size / sizeof(u32));
return size;
}
if (ret != 0)
return ret;
+ qlcnic_swap32_buffer((u32 *)buf, size / sizeof(u32));
qlcnic_write_crb(adapter, buf, offset, size);
return size;
}
return -EIO;
memcpy(buf, &data, size);
+ qlcnic_swap32_buffer((u32 *)buf, size / sizeof(u32));
return size;
}
if (ret != 0)
return ret;
+ qlcnic_swap32_buffer((u32 *)buf, size / sizeof(u32));
memcpy(&data, buf, size);
if (qlcnic_pci_mem_write_2M(adapter, offset, data))
if (rem)
return QL_STATUS_INVALID_PARAM;
+ qlcnic_swap32_buffer((u32 *)buf, size / sizeof(u32));
pm_cfg = (struct qlcnic_pm_func_cfg *)buf;
ret = validate_pm_config(adapter, pm_cfg, count);
pm_cfg[pci_func].dest_npar = 0;
pm_cfg[pci_func].pci_func = i;
}
+ qlcnic_swap32_buffer((u32 *)buf, size / sizeof(u32));
return size;
}
if (rem)
return QL_STATUS_INVALID_PARAM;
+ qlcnic_swap32_buffer((u32 *)buf, size / sizeof(u32));
esw_cfg = (struct qlcnic_esw_func_cfg *)buf;
ret = validate_esw_config(adapter, esw_cfg, count);
if (ret)
if (qlcnic_get_eswitch_port_config(adapter, &esw_cfg[pci_func]))
return QL_STATUS_INVALID_PARAM;
}
+ qlcnic_swap32_buffer((u32 *)buf, size / sizeof(u32));
return size;
}
if (rem)
return QL_STATUS_INVALID_PARAM;
+ qlcnic_swap32_buffer((u32 *)buf, size / sizeof(u32));
np_cfg = (struct qlcnic_npar_func_cfg *)buf;
ret = validate_npar_config(adapter, np_cfg, count);
if (ret)
np_cfg[pci_func].max_tx_queues = nic_info.max_tx_ques;
np_cfg[pci_func].max_rx_queues = nic_info.max_rx_ques;
}
+ qlcnic_swap32_buffer((u32 *)buf, size / sizeof(u32));
return size;
}
pci_cfg = (struct qlcnic_pci_func_cfg *)buf;
count = size / sizeof(struct qlcnic_pci_func_cfg);
+ qlcnic_swap32_buffer((u32 *)pci_info, size / sizeof(u32));
for (i = 0; i < count; i++) {
pci_cfg[i].pci_func = pci_info[i].id;
pci_cfg[i].func_type = pci_info[i].type;
}
qlcnic_83xx_unlock_flash(adapter);
+ qlcnic_swap32_buffer((u32 *)p_read_buf, count);
memcpy(buf, p_read_buf, size);
kfree(p_read_buf);
if (!p_cache)
return -ENOMEM;
+ count = size / sizeof(u32);
+ qlcnic_swap32_buffer((u32 *)buf, count);
memcpy(p_cache, buf, size);
p_src = p_cache;
- count = size / sizeof(u32);
if (qlcnic_83xx_lock_flash(adapter) != 0) {
kfree(p_cache);
if (!p_cache)
return -ENOMEM;
+ qlcnic_swap32_buffer((u32 *)buf, size / sizeof(u32));
memcpy(p_cache, buf, size);
p_src = p_cache;
count = size / sizeof(u32);
struct macvlan_dev *vlan = netdev_priv(dev);
int err = -EINVAL;
- if (!vlan->port->passthru)
+ /* Support unicast filter only on passthru devices.
+ * Multicast filter should be allowed on all devices.
+ */
+ if (!vlan->port->passthru && is_unicast_ether_addr(addr))
return -EOPNOTSUPP;
if (flags & NLM_F_REPLACE)
struct macvlan_dev *vlan = netdev_priv(dev);
int err = -EINVAL;
- if (!vlan->port->passthru)
+ /* Support unicast filter only on passthru devices.
+ * Multicast filter should be allowed on all devices.
+ */
+ if (!vlan->port->passthru && is_unicast_ether_addr(addr))
return -EOPNOTSUPP;
if (is_unicast_ether_addr(addr))
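The fdb hunks above relax the passthru-only check so that only unicast
entries require passthru mode; the distinction rests on the I/G bit of the
address. A sketch of the two helpers as etherdevice.h defines them,
reimplemented here for a standalone build:

#include <stdbool.h>
#include <stdio.h>

/* An Ethernet address is multicast when the I/G bit (bit 0 of the
 * first octet) is set; unicast is simply its negation.
 */
static bool is_multicast_ether_addr(const unsigned char *addr)
{
	return addr[0] & 0x01;
}

static bool is_unicast_ether_addr(const unsigned char *addr)
{
	return !is_multicast_ether_addr(addr);
}

int main(void)
{
	unsigned char uc[6] = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 };
	unsigned char mc[6] = { 0x01, 0x00, 0x5e, 0x00, 0x00, 0x01 };

	/* Prints: uc unicast: 1, mc unicast: 0 */
	printf("uc unicast: %d, mc unicast: %d\n",
	       is_unicast_ether_addr(uc), is_unicast_ether_addr(mc));
	return 0;
}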
return bcm7xxx_28nm_afe_config_init(phydev);
}
+static int bcm7xxx_28nm_resume(struct phy_device *phydev)
+{
+ int ret;
+
+	/* Re-apply workarounds coming out of suspend/resume */
+ ret = bcm7xxx_28nm_config_init(phydev);
+ if (ret)
+ return ret;
+
+ /* 28nm Gigabit PHYs come out of reset without any half-duplex
+ * or "hub" compliant advertised mode, fix that. This does not
+ * cause any problems with the PHY library since genphy_config_aneg()
+ * gracefully handles auto-negotiated and forced modes.
+ */
+ return genphy_config_aneg(phydev);
+}
+
static int phy_set_clr_bits(struct phy_device *dev, int location,
int set_mask, int clr_mask)
{
}
/* Workaround for putting the PHY in IDDQ mode, required
- * for all BCM7XXX PHYs
+ * for all BCM7XXX 40nm and 65nm PHYs
*/
static int bcm7xxx_suspend(struct phy_device *phydev)
{
.config_init = bcm7xxx_28nm_afe_config_init,
.config_aneg = genphy_config_aneg,
.read_status = genphy_read_status,
- .suspend = bcm7xxx_suspend,
- .resume = bcm7xxx_28nm_afe_config_init,
+ .resume = bcm7xxx_28nm_resume,
.driver = { .owner = THIS_MODULE },
}, {
.phy_id = PHY_ID_BCM7439,
.config_init = bcm7xxx_28nm_afe_config_init,
.config_aneg = genphy_config_aneg,
.read_status = genphy_read_status,
- .suspend = bcm7xxx_suspend,
- .resume = bcm7xxx_28nm_afe_config_init,
+ .resume = bcm7xxx_28nm_resume,
.driver = { .owner = THIS_MODULE },
}, {
.phy_id = PHY_ID_BCM7445,
.config_init = bcm7xxx_28nm_config_init,
.config_aneg = genphy_config_aneg,
.read_status = genphy_read_status,
- .suspend = bcm7xxx_suspend,
- .resume = bcm7xxx_28nm_config_init,
- .driver = { .owner = THIS_MODULE },
-}, {
- .name = "Broadcom BCM7XXX 28nm",
- .phy_id = PHY_ID_BCM7XXX_28,
- .phy_id_mask = PHY_BCM_OUI_MASK,
- .features = PHY_GBIT_FEATURES |
- SUPPORTED_Pause | SUPPORTED_Asym_Pause,
- .flags = PHY_IS_INTERNAL,
- .config_init = bcm7xxx_28nm_config_init,
- .config_aneg = genphy_config_aneg,
- .read_status = genphy_read_status,
- .suspend = bcm7xxx_suspend,
- .resume = bcm7xxx_28nm_config_init,
+ .resume = bcm7xxx_28nm_afe_config_init,
.driver = { .owner = THIS_MODULE },
}, {
.phy_id = PHY_BCM_OUI_4,
{ PHY_ID_BCM7366, 0xfffffff0, },
{ PHY_ID_BCM7439, 0xfffffff0, },
{ PHY_ID_BCM7445, 0xfffffff0, },
- { PHY_ID_BCM7XXX_28, 0xfffffc00 },
{ PHY_BCM_OUI_4, 0xffff0000 },
{ PHY_BCM_OUI_5, 0xffffff00 },
{ }
static int smsc_phy_config_init(struct phy_device *phydev)
{
+ int rc = phy_read(phydev, MII_LAN83C185_CTRL_STATUS);
+
+ if (rc < 0)
+ return rc;
+
+	/* Enable energy detect mode for these SMSC transceivers */
+ rc = phy_write(phydev, MII_LAN83C185_CTRL_STATUS,
+ rc | MII_LAN83C185_EDPWRDOWN);
+ if (rc < 0)
+ return rc;
+
+ return smsc_phy_ack_interrupt(phydev);
+}
+
+static int smsc_phy_reset(struct phy_device *phydev)
+{
int rc = phy_read(phydev, MII_LAN83C185_SPECIAL_MODES);
if (rc < 0)
return rc;
rc = phy_read(phydev, MII_BMCR);
} while (rc & BMCR_RESET);
}
-
- rc = phy_read(phydev, MII_LAN83C185_CTRL_STATUS);
- if (rc < 0)
- return rc;
-
- /* Enable energy detect mode for this SMSC Transceivers */
- rc = phy_write(phydev, MII_LAN83C185_CTRL_STATUS,
- rc | MII_LAN83C185_EDPWRDOWN);
- if (rc < 0)
- return rc;
-
- return smsc_phy_ack_interrupt (phydev);
+ return 0;
}
static int lan911x_config_init(struct phy_device *phydev)
.config_aneg = genphy_config_aneg,
.read_status = genphy_read_status,
.config_init = smsc_phy_config_init,
+ .soft_reset = smsc_phy_reset,
/* IRQ related */
.ack_interrupt = smsc_phy_ack_interrupt,
.config_aneg = genphy_config_aneg,
.read_status = genphy_read_status,
.config_init = smsc_phy_config_init,
+ .soft_reset = smsc_phy_reset,
/* IRQ related */
.ack_interrupt = smsc_phy_ack_interrupt,
.config_aneg = genphy_config_aneg,
.read_status = genphy_read_status,
.config_init = smsc_phy_config_init,
+ .soft_reset = smsc_phy_reset,
/* IRQ related */
.ack_interrupt = smsc_phy_ack_interrupt,
.config_aneg = genphy_config_aneg,
.read_status = lan87xx_read_status,
.config_init = smsc_phy_config_init,
+ .soft_reset = smsc_phy_reset,
/* IRQ related */
.ack_interrupt = smsc_phy_ack_interrupt,
unsigned int best = 0;
struct pwm_lookup *p;
unsigned int match;
+ unsigned int period;
+ enum pwm_polarity polarity;
/* look up via DT first */
if (IS_ENABLED(CONFIG_OF) && dev && dev->of_node)
if (match > best) {
chip = pwmchip_find_by_name(p->provider);
index = p->index;
+ period = p->period;
+ polarity = p->polarity;
if (match != 3)
best = match;
if (IS_ERR(pwm))
return pwm;
- pwm_set_period(pwm, p->period);
- pwm_set_polarity(pwm, p->polarity);
+ pwm_set_period(pwm, period);
+ pwm_set_polarity(pwm, polarity);
return pwm;
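The pwm_get() fix above copies period and polarity out of the lookup entry
while the loop cursor still points at the matched element; after a
list_for_each_entry() loop terminates, the cursor no longer references a
valid entry. A reduced sketch of the copy-inside-the-loop pattern
(array-based for brevity):

#include <stdio.h>

struct lookup { const char *provider; unsigned int period; };

int main(void)
{
	struct lookup table[] = {
		{ "pwm-a", 1000000 },
		{ "pwm-b", 5000000 },
	};
	unsigned int best_period = 0;
	int i;

	for (i = 0; i < 2; i++) {
		if (table[i].period > best_period)
			best_period = table[i].period;	/* copy now */
	}
	/* 'i' no longer indexes a valid entry here; best_period does. */
	printf("period = %u\n", best_period);
	return 0;
}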
const unsigned char *buf, int count)
{
struct raw3215_info *raw;
+ int i, written;
if (!tty)
return 0;
raw = (struct raw3215_info *) tty->driver_data;
- raw3215_write(raw, buf, count);
- return count;
+ written = count;
+ while (count > 0) {
+ for (i = 0; i < count; i++)
+ if (buf[i] == '\t' || buf[i] == '\n')
+ break;
+ raw3215_write(raw, buf, i);
+ count -= i;
+ buf += i;
+ if (count > 0) {
+ raw3215_putchar(raw, *buf);
+ count--;
+ buf++;
+ }
+ }
+ return written;
}
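With XTABS removed from the termios defaults, tty3215_write() now scans for
'\t' and '\n' itself, pushing plain runs through the bulk path and routing
each control character through raw3215_putchar(). A compact user-space model
of that split loop (raw_write/raw_putchar are stand-ins for the driver calls):

#include <stdio.h>
#include <string.h>

static void raw_write(const char *buf, int len)
{
	printf("chunk: \"%.*s\"\n", len, buf);
}

static void raw_putchar(char c)
{
	printf("ctl:   0x%02x\n", (unsigned char)c);
}

static int tty_write_like(const char *buf, int count)
{
	int written = count, i;

	while (count > 0) {
		/* Find the next tab or newline, if any. */
		for (i = 0; i < count; i++)
			if (buf[i] == '\t' || buf[i] == '\n')
				break;
		raw_write(buf, i);	/* bulk path for the plain run */
		count -= i;
		buf += i;
		if (count > 0) {	/* buf[0] is '\t' or '\n' */
			raw_putchar(*buf);
			count--;
			buf++;
		}
	}
	return written;
}

int main(void)
{
	const char *s = "ab\tcd\n";

	return tty_write_like(s, strlen(s)) == 6 ? 0 : 1;
}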
/*
driver->subtype = SYSTEM_TYPE_TTY;
driver->init_termios = tty_std_termios;
driver->init_termios.c_iflag = IGNBRK | IGNPAR;
- driver->init_termios.c_oflag = ONLCR | XTABS;
+ driver->init_termios.c_oflag = ONLCR;
driver->init_termios.c_lflag = ISIG;
driver->flags = TTY_DRIVER_REAL_RAW;
tty_set_operations(driver, &tty3215_ops);
driver->subtype = SYSTEM_TYPE_TTY;
driver->init_termios = tty_std_termios;
driver->init_termios.c_iflag = IGNBRK | IGNPAR;
- driver->init_termios.c_oflag = ONLCR | XTABS;
+ driver->init_termios.c_oflag = ONLCR;
driver->init_termios.c_lflag = ISIG | ECHO;
driver->flags = TTY_DRIVER_REAL_RAW;
tty_set_operations(driver, &sclp_ops);
#
# Makefile for the SuperH specific drivers.
#
-obj-$(CONFIG_SUPERH) += intc/
-obj-$(CONFIG_ARCH_SHMOBILE_LEGACY) += intc/
+obj-$(CONFIG_SH_INTC) += intc/
ifneq ($(CONFIG_COMMON_CLK),y)
obj-$(CONFIG_HAVE_CLK) += clk/
endif
config SH_INTC
- def_bool y
+ bool
select IRQ_DOMAIN
+if SH_INTC
+
comment "Interrupt controller options"
config INTC_USERIMASK
between system IRQs and the per-controller id tables.
If in doubt, say N.
+
+endif
struct {
unsigned tail;
+ unsigned completed_events;
spinlock_t completion_lock;
} ____cacheline_aligned_in_smp;
return ret;
}
+/* refill_reqs_available
+ * Updates the reqs_available reference counts used for tracking the
+ * number of free slots in the completion ring. This can be called
+ * from aio_complete() (to optimistically update reqs_available) or
+ * from aio_get_req() (the we're out of events case). It must be
+ *	from aio_get_req() (the we're-out-of-events case). It must be
+ */
+static void refill_reqs_available(struct kioctx *ctx, unsigned head,
+ unsigned tail)
+{
+ unsigned events_in_ring, completed;
+
+ /* Clamp head since userland can write to it. */
+ head %= ctx->nr_events;
+ if (head <= tail)
+ events_in_ring = tail - head;
+ else
+ events_in_ring = ctx->nr_events - (head - tail);
+
+ completed = ctx->completed_events;
+ if (events_in_ring < completed)
+ completed -= events_in_ring;
+ else
+ completed = 0;
+
+ if (!completed)
+ return;
+
+ ctx->completed_events -= completed;
+ put_reqs_available(ctx, completed);
+}
+
+/* user_refill_reqs_available
+ * Called to refill reqs_available when aio_get_req() encounters an
+ *	Called to refill reqs_available when aio_get_req() runs out of
+ *	space in the completion ring.
+static void user_refill_reqs_available(struct kioctx *ctx)
+{
+ spin_lock_irq(&ctx->completion_lock);
+ if (ctx->completed_events) {
+ struct aio_ring *ring;
+ unsigned head;
+
+ /* Access of ring->head may race with aio_read_events_ring()
+ * here, but that's okay since whether we read the old version
+		 * or the new version, either will be valid. The important
+ * part is that head cannot pass tail since we prevent
+ * aio_complete() from updating tail by holding
+ * ctx->completion_lock. Even if head is invalid, the check
+ * against ctx->completed_events below will make sure we do the
+ * safe/right thing.
+ */
+ ring = kmap_atomic(ctx->ring_pages[0]);
+ head = ring->head;
+ kunmap_atomic(ring);
+
+ refill_reqs_available(ctx, head, ctx->tail);
+ }
+
+ spin_unlock_irq(&ctx->completion_lock);
+}
+
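refill_reqs_available() above reduces to ring-occupancy arithmetic: clamp the
untrusted head, handle the wrapped case, and credit back only completions
that have left the ring. The core computation, extracted into a runnable
sketch:

#include <stdio.h>

/* Occupancy of a circular completion ring: head is clamped because
 * userspace can scribble on it, and the wrap-around case is handled
 * explicitly.
 */
static unsigned events_in_ring(unsigned head, unsigned tail,
			       unsigned nr_events)
{
	head %= nr_events;	/* clamp untrusted head */
	if (head <= tail)
		return tail - head;
	return nr_events - (head - tail);
}

int main(void)
{
	printf("%u\n", events_in_ring(2, 5, 8));  /* 3          */
	printf("%u\n", events_in_ring(6, 1, 8));  /* 3, wrapped */
	printf("%u\n", events_in_ring(99, 1, 8)); /* 6, clamped */
	return 0;
}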
/* aio_get_req
* Allocate a slot for an aio request.
* Returns NULL if no requests are free.
{
struct kiocb *req;
- if (!get_reqs_available(ctx))
- return NULL;
+ if (!get_reqs_available(ctx)) {
+ user_refill_reqs_available(ctx);
+ if (!get_reqs_available(ctx))
+ return NULL;
+ }
req = kmem_cache_alloc(kiocb_cachep, GFP_KERNEL|__GFP_ZERO);
if (unlikely(!req))
struct kioctx *ctx = iocb->ki_ctx;
struct aio_ring *ring;
struct io_event *ev_page, *event;
+ unsigned tail, pos, head;
unsigned long flags;
- unsigned tail, pos;
/*
* Special case handling for sync iocbs:
ctx->tail = tail;
ring = kmap_atomic(ctx->ring_pages[0]);
+ head = ring->head;
ring->tail = tail;
kunmap_atomic(ring);
flush_dcache_page(ctx->ring_pages[0]);
+ ctx->completed_events++;
+ if (ctx->completed_events > 1)
+ refill_reqs_available(ctx, head, tail);
spin_unlock_irqrestore(&ctx->completion_lock, flags);
pr_debug("added to ring %p at [%u]\n", iocb, tail);
/* everything turned out well, dispose of the aiocb. */
kiocb_free(iocb);
- put_reqs_available(ctx, 1);
/*
* We have to order our ring_info tail store above and test
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/freezer.h>
-#include <linux/workqueue.h>
#include "async-thread.h"
#include "ctree.h"
struct __btrfs_workqueue *high;
};
-static inline struct __btrfs_workqueue
-*__btrfs_alloc_workqueue(const char *name, int flags, int max_active,
+static void normal_work_helper(struct btrfs_work *work);
+
+#define BTRFS_WORK_HELPER(name) \
+void btrfs_##name(struct work_struct *arg) \
+{ \
+ struct btrfs_work *work = container_of(arg, struct btrfs_work, \
+ normal_work); \
+ normal_work_helper(work); \
+}
+
+BTRFS_WORK_HELPER(worker_helper);
+BTRFS_WORK_HELPER(delalloc_helper);
+BTRFS_WORK_HELPER(flush_delalloc_helper);
+BTRFS_WORK_HELPER(cache_helper);
+BTRFS_WORK_HELPER(submit_helper);
+BTRFS_WORK_HELPER(fixup_helper);
+BTRFS_WORK_HELPER(endio_helper);
+BTRFS_WORK_HELPER(endio_meta_helper);
+BTRFS_WORK_HELPER(endio_meta_write_helper);
+BTRFS_WORK_HELPER(endio_raid56_helper);
+BTRFS_WORK_HELPER(rmw_helper);
+BTRFS_WORK_HELPER(endio_write_helper);
+BTRFS_WORK_HELPER(freespace_write_helper);
+BTRFS_WORK_HELPER(delayed_meta_helper);
+BTRFS_WORK_HELPER(readahead_helper);
+BTRFS_WORK_HELPER(qgroup_rescan_helper);
+BTRFS_WORK_HELPER(extent_refs_helper);
+BTRFS_WORK_HELPER(scrub_helper);
+BTRFS_WORK_HELPER(scrubwrc_helper);
+BTRFS_WORK_HELPER(scrubnc_helper);
+
+static struct __btrfs_workqueue *
+__btrfs_alloc_workqueue(const char *name, int flags, int max_active,
int thresh)
{
struct __btrfs_workqueue *ret = kzalloc(sizeof(*ret), GFP_NOFS);
spin_unlock_irqrestore(lock, flags);
}
-static void normal_work_helper(struct work_struct *arg)
+static void normal_work_helper(struct btrfs_work *work)
{
- struct btrfs_work *work;
struct __btrfs_workqueue *wq;
int need_order = 0;
- work = container_of(arg, struct btrfs_work, normal_work);
/*
* We should not touch things inside work in the following cases:
* 1) after work->func() if it has no ordered_free
trace_btrfs_all_work_done(work);
}
-void btrfs_init_work(struct btrfs_work *work,
+void btrfs_init_work(struct btrfs_work *work, btrfs_work_func_t uniq_func,
btrfs_func_t func,
btrfs_func_t ordered_func,
btrfs_func_t ordered_free)
work->func = func;
work->ordered_func = ordered_func;
work->ordered_free = ordered_free;
- INIT_WORK(&work->normal_work, normal_work_helper);
+ INIT_WORK(&work->normal_work, uniq_func);
INIT_LIST_HEAD(&work->ordered_list);
work->flags = 0;
}
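The BTRFS_WORK_HELPER() macro above stamps out one uniquely named trampoline
per queue, each recovering the btrfs_work with container_of(); giving every
queue a distinct function symbol lets infrastructure that keys off the work
function pointer tell the queues apart. A condensed user-space sketch of the
same macro trick (struct and helper names here are illustrative):

#include <stddef.h>
#include <stdio.h>

#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct work_struct { int pending; };

struct btrfs_work_like {
	struct work_struct normal_work;
	const char *tag;
};

static void normal_work_helper(struct btrfs_work_like *work)
{
	printf("running %s\n", work->tag);
}

/* One uniquely named trampoline per queue, each with its own symbol. */
#define WORK_HELPER(name)						\
static void name(struct work_struct *arg)				\
{									\
	normal_work_helper(container_of(arg, struct btrfs_work_like,	\
					normal_work));			\
}

WORK_HELPER(endio_helper)
WORK_HELPER(submit_helper)

int main(void)
{
	struct btrfs_work_like w1 = { .tag = "endio" };
	struct btrfs_work_like w2 = { .tag = "submit" };

	endio_helper(&w1.normal_work);
	submit_helper(&w2.normal_work);
	return 0;
}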
#ifndef __BTRFS_ASYNC_THREAD_
#define __BTRFS_ASYNC_THREAD_
+#include <linux/workqueue.h>
struct btrfs_workqueue;
/* Internal use only */
struct __btrfs_workqueue;
struct btrfs_work;
typedef void (*btrfs_func_t)(struct btrfs_work *arg);
+typedef void (*btrfs_work_func_t)(struct work_struct *arg);
struct btrfs_work {
btrfs_func_t func;
unsigned long flags;
};
+#define BTRFS_WORK_HELPER_PROTO(name) \
+void btrfs_##name(struct work_struct *arg)
+
+BTRFS_WORK_HELPER_PROTO(worker_helper);
+BTRFS_WORK_HELPER_PROTO(delalloc_helper);
+BTRFS_WORK_HELPER_PROTO(flush_delalloc_helper);
+BTRFS_WORK_HELPER_PROTO(cache_helper);
+BTRFS_WORK_HELPER_PROTO(submit_helper);
+BTRFS_WORK_HELPER_PROTO(fixup_helper);
+BTRFS_WORK_HELPER_PROTO(endio_helper);
+BTRFS_WORK_HELPER_PROTO(endio_meta_helper);
+BTRFS_WORK_HELPER_PROTO(endio_meta_write_helper);
+BTRFS_WORK_HELPER_PROTO(endio_raid56_helper);
+BTRFS_WORK_HELPER_PROTO(rmw_helper);
+BTRFS_WORK_HELPER_PROTO(endio_write_helper);
+BTRFS_WORK_HELPER_PROTO(freespace_write_helper);
+BTRFS_WORK_HELPER_PROTO(delayed_meta_helper);
+BTRFS_WORK_HELPER_PROTO(readahead_helper);
+BTRFS_WORK_HELPER_PROTO(qgroup_rescan_helper);
+BTRFS_WORK_HELPER_PROTO(extent_refs_helper);
+BTRFS_WORK_HELPER_PROTO(scrub_helper);
+BTRFS_WORK_HELPER_PROTO(scrubwrc_helper);
+BTRFS_WORK_HELPER_PROTO(scrubnc_helper);
+
struct btrfs_workqueue *btrfs_alloc_workqueue(const char *name,
int flags,
int max_active,
int thresh);
-void btrfs_init_work(struct btrfs_work *work,
+void btrfs_init_work(struct btrfs_work *work, btrfs_work_func_t helper,
btrfs_func_t func,
btrfs_func_t ordered_func,
btrfs_func_t ordered_free);
return -ENOMEM;
async_work->delayed_root = delayed_root;
- btrfs_init_work(&async_work->work, btrfs_async_run_delayed_root,
- NULL, NULL);
+ btrfs_init_work(&async_work->work, btrfs_delayed_meta_helper,
+ btrfs_async_run_delayed_root, NULL, NULL);
async_work->nr = nr;
btrfs_queue_work(root->fs_info->delayed_workers, &async_work->work);
#include "btrfs_inode.h"
#include "volumes.h"
#include "print-tree.h"
-#include "async-thread.h"
#include "locking.h"
#include "tree-log.h"
#include "free-space-cache.h"
{
struct end_io_wq *end_io_wq = bio->bi_private;
struct btrfs_fs_info *fs_info;
+ struct btrfs_workqueue *wq;
+ btrfs_work_func_t func;
fs_info = end_io_wq->info;
end_io_wq->error = err;
- btrfs_init_work(&end_io_wq->work, end_workqueue_fn, NULL, NULL);
if (bio->bi_rw & REQ_WRITE) {
- if (end_io_wq->metadata == BTRFS_WQ_ENDIO_METADATA)
- btrfs_queue_work(fs_info->endio_meta_write_workers,
- &end_io_wq->work);
- else if (end_io_wq->metadata == BTRFS_WQ_ENDIO_FREE_SPACE)
- btrfs_queue_work(fs_info->endio_freespace_worker,
- &end_io_wq->work);
- else if (end_io_wq->metadata == BTRFS_WQ_ENDIO_RAID56)
- btrfs_queue_work(fs_info->endio_raid56_workers,
- &end_io_wq->work);
- else
- btrfs_queue_work(fs_info->endio_write_workers,
- &end_io_wq->work);
+ if (end_io_wq->metadata == BTRFS_WQ_ENDIO_METADATA) {
+ wq = fs_info->endio_meta_write_workers;
+ func = btrfs_endio_meta_write_helper;
+ } else if (end_io_wq->metadata == BTRFS_WQ_ENDIO_FREE_SPACE) {
+ wq = fs_info->endio_freespace_worker;
+ func = btrfs_freespace_write_helper;
+ } else if (end_io_wq->metadata == BTRFS_WQ_ENDIO_RAID56) {
+ wq = fs_info->endio_raid56_workers;
+ func = btrfs_endio_raid56_helper;
+ } else {
+ wq = fs_info->endio_write_workers;
+ func = btrfs_endio_write_helper;
+ }
} else {
- if (end_io_wq->metadata == BTRFS_WQ_ENDIO_RAID56)
- btrfs_queue_work(fs_info->endio_raid56_workers,
- &end_io_wq->work);
- else if (end_io_wq->metadata)
- btrfs_queue_work(fs_info->endio_meta_workers,
- &end_io_wq->work);
- else
- btrfs_queue_work(fs_info->endio_workers,
- &end_io_wq->work);
+ if (end_io_wq->metadata == BTRFS_WQ_ENDIO_RAID56) {
+ wq = fs_info->endio_raid56_workers;
+ func = btrfs_endio_raid56_helper;
+ } else if (end_io_wq->metadata) {
+ wq = fs_info->endio_meta_workers;
+ func = btrfs_endio_meta_helper;
+ } else {
+ wq = fs_info->endio_workers;
+ func = btrfs_endio_helper;
+ }
}
+
+ btrfs_init_work(&end_io_wq->work, func, end_workqueue_fn, NULL, NULL);
+ btrfs_queue_work(wq, &end_io_wq->work);
}
/*
async->submit_bio_start = submit_bio_start;
async->submit_bio_done = submit_bio_done;
- btrfs_init_work(&async->work, run_one_async_start,
+ btrfs_init_work(&async->work, btrfs_worker_helper, run_one_async_start,
run_one_async_done, run_one_async_free);
async->bio_flags = bio_flags;
btrfs_set_stack_device_generation(dev_item, 0);
btrfs_set_stack_device_type(dev_item, dev->type);
btrfs_set_stack_device_id(dev_item, dev->devid);
- btrfs_set_stack_device_total_bytes(dev_item, dev->total_bytes);
+ btrfs_set_stack_device_total_bytes(dev_item,
+ dev->disk_total_bytes);
btrfs_set_stack_device_bytes_used(dev_item, dev->bytes_used);
btrfs_set_stack_device_io_align(dev_item, dev->io_align);
btrfs_set_stack_device_io_width(dev_item, dev->io_width);
caching_ctl->block_group = cache;
caching_ctl->progress = cache->key.objectid;
atomic_set(&caching_ctl->count, 1);
- btrfs_init_work(&caching_ctl->work, caching_thread, NULL, NULL);
+ btrfs_init_work(&caching_ctl->work, btrfs_cache_helper,
+ caching_thread, NULL, NULL);
spin_lock(&cache->lock);
/*
async->sync = 0;
init_completion(&async->wait);
- btrfs_init_work(&async->work, delayed_ref_async_start,
- NULL, NULL);
+ btrfs_init_work(&async->work, btrfs_extent_refs_helper,
+ delayed_ref_async_start, NULL, NULL);
btrfs_queue_work(root->fs_info->extent_workers, &async->work);
*/
static u64 btrfs_reduce_alloc_profile(struct btrfs_root *root, u64 flags)
{
- /*
- * we add in the count of missing devices because we want
- * to make sure that any RAID levels on a degraded FS
- * continue to be honored.
- */
- u64 num_devices = root->fs_info->fs_devices->rw_devices +
- root->fs_info->fs_devices->missing_devices;
+ u64 num_devices = root->fs_info->fs_devices->rw_devices;
u64 target;
u64 tmp;
if (stripped)
return extended_to_chunk(stripped);
- /*
- * we add in the count of missing devices because we want
- * to make sure that any RAID levels on a degraded FS
- * continue to be honored.
- */
- num_devices = root->fs_info->fs_devices->rw_devices +
- root->fs_info->fs_devices->missing_devices;
+ num_devices = root->fs_info->fs_devices->rw_devices;
stripped = BTRFS_BLOCK_GROUP_RAID0 |
BTRFS_BLOCK_GROUP_RAID5 | BTRFS_BLOCK_GROUP_RAID6 |
test_bit(BIO_UPTODATE, &bio->bi_flags);
if (err)
uptodate = 0;
+ offset += len;
continue;
}
}
return -ENOMEM;
path->leave_spinning = 1;
- start = ALIGN(start, BTRFS_I(inode)->root->sectorsize);
- len = ALIGN(len, BTRFS_I(inode)->root->sectorsize);
+ start = round_down(start, BTRFS_I(inode)->root->sectorsize);
+ len = round_up(max, BTRFS_I(inode)->root->sectorsize) - start;
/*
* lookup the last file extent. We're not using i_size here
{
if (filp->private_data)
btrfs_ioctl_trans_end(filp);
- filemap_flush(inode->i_mapping);
+ /*
+	 * ordered_data_close is set by setattr when we are about to truncate
+ * a file from a non-zero size to a zero size. This tries to
+ * flush down new bytes that may have been written if the
+ * application were using truncate to replace a file in place.
+ */
+ if (test_and_clear_bit(BTRFS_INODE_ORDERED_DATA_CLOSE,
+ &BTRFS_I(inode)->runtime_flags))
+ filemap_flush(inode->i_mapping);
return 0;
}
goto out;
}
- if (hole_mergeable(inode, leaf, path->slots[0]+1, offset, end)) {
+ if (hole_mergeable(inode, leaf, path->slots[0], offset, end)) {
u64 num_bytes;
- path->slots[0]++;
key.offset = offset;
btrfs_set_item_key_safe(root, path, &key);
fi = btrfs_item_ptr(leaf, path->slots[0],
goto out_only_mutex;
}
- lockstart = round_up(offset , BTRFS_I(inode)->root->sectorsize);
+ lockstart = round_up(offset, BTRFS_I(inode)->root->sectorsize);
lockend = round_down(offset + len,
BTRFS_I(inode)->root->sectorsize) - 1;
same_page = ((offset >> PAGE_CACHE_SHIFT) ==
tail_start + tail_len, 0, 1);
if (ret)
goto out_only_mutex;
- }
+ }
}
}
async_cow->end = cur_end;
INIT_LIST_HEAD(&async_cow->extents);
- btrfs_init_work(&async_cow->work, async_cow_start,
- async_cow_submit, async_cow_free);
+ btrfs_init_work(&async_cow->work,
+ btrfs_delalloc_helper,
+ async_cow_start, async_cow_submit,
+ async_cow_free);
nr_pages = (cur_end - start + PAGE_CACHE_SIZE) >>
PAGE_CACHE_SHIFT;
SetPageChecked(page);
page_cache_get(page);
- btrfs_init_work(&fixup->work, btrfs_writepage_fixup_worker, NULL, NULL);
+ btrfs_init_work(&fixup->work, btrfs_fixup_helper,
+ btrfs_writepage_fixup_worker, NULL, NULL);
fixup->page = page;
btrfs_queue_work(root->fs_info->fixup_workers, &fixup->work);
return -EBUSY;
struct inode *inode = page->mapping->host;
struct btrfs_root *root = BTRFS_I(inode)->root;
struct btrfs_ordered_extent *ordered_extent = NULL;
- struct btrfs_workqueue *workers;
+ struct btrfs_workqueue *wq;
+ btrfs_work_func_t func;
trace_btrfs_writepage_end_io_hook(page, start, end, uptodate);
end - start + 1, uptodate))
return 0;
- btrfs_init_work(&ordered_extent->work, finish_ordered_fn, NULL, NULL);
+ if (btrfs_is_free_space_inode(inode)) {
+ wq = root->fs_info->endio_freespace_worker;
+ func = btrfs_freespace_write_helper;
+ } else {
+ wq = root->fs_info->endio_write_workers;
+ func = btrfs_endio_write_helper;
+ }
- if (btrfs_is_free_space_inode(inode))
- workers = root->fs_info->endio_freespace_worker;
- else
- workers = root->fs_info->endio_write_workers;
- btrfs_queue_work(workers, &ordered_extent->work);
+ btrfs_init_work(&ordered_extent->work, func, finish_ordered_fn, NULL,
+ NULL);
+ btrfs_queue_work(wq, &ordered_extent->work);
return 0;
}
clear_bit(EXTENT_FLAG_LOGGING, &em->flags);
remove_extent_mapping(map_tree, em);
free_extent_map(em);
+ if (need_resched()) {
+ write_unlock(&map_tree->lock);
+ cond_resched();
+ write_lock(&map_tree->lock);
+ }
}
write_unlock(&map_tree->lock);
&cached_state, GFP_NOFS);
free_extent_state(state);
+ cond_resched();
spin_lock(&io_tree->lock);
}
spin_unlock(&io_tree->lock);
iput(inode);
inode = ERR_PTR(ret);
}
+ /*
+ * If orphan cleanup did remove any orphans, it means the tree
+ * was modified and therefore the commit root is not the same as
+ * the current root anymore. This is a problem, because send
+ * uses the commit root and therefore can see inode items that
+ * don't exist in the current root anymore, and for example make
+ * calls to btrfs_iget, which will do tree lookups based on the
+ * current root and not on the commit root. Those lookups will
+ * fail, returning a -ESTALE error, and making send fail with
+ * that error. So make sure a send does not see any orphans we
+ * have just removed, and that it will see the same inodes
+ * regardless of whether a transaction commit happened before
+ * it started (meaning that the commit root will be the same as
+ * the current root) or not.
+ */
+ if (sub_root->node != sub_root->commit_root) {
+ u64 sub_flags = btrfs_root_flags(&sub_root->root_item);
+
+ if (sub_flags & BTRFS_ROOT_SUBVOL_RDONLY) {
+ struct extent_buffer *eb;
+
+ /*
+ * Assert we can't have races between dentry
+ * lookup called through the snapshot creation
+ * ioctl and the VFS.
+ */
+ ASSERT(mutex_is_locked(&dir->i_mutex));
+
+ down_write(&root->fs_info->commit_root_sem);
+ eb = sub_root->commit_root;
+ sub_root->commit_root =
+ btrfs_root_node(sub_root);
+ up_write(&root->fs_info->commit_root_sem);
+ free_extent_buffer(eb);
+ }
+ }
}
return inode;
}
/*
+	 * For O_TMPFILE, set the link count to 0 so that, after this point,
+	 * we fill in an inode item with the correct link count.
+ */
+ if (!name)
+ set_nlink(inode, 0);
+
+ /*
* we have to initialize this early, so we can reclaim the inode
* number if we fail afterwards in this function.
*/
static int merge_extent_mapping(struct extent_map_tree *em_tree,
struct extent_map *existing,
struct extent_map *em,
- u64 map_start, u64 map_len)
+ u64 map_start)
{
u64 start_diff;
BUG_ON(map_start < em->start || map_start >= extent_map_end(em));
start_diff = map_start - em->start;
em->start = map_start;
- em->len = map_len;
+ em->len = existing->start - em->start;
if (em->block_start < EXTENT_MAP_LAST_BYTE &&
!test_bit(EXTENT_FLAG_COMPRESSED, &em->flags)) {
em->block_start += start_diff;
goto not_found;
if (start + len <= found_key.offset)
goto not_found;
+ if (start > found_key.offset)
+ goto next;
em->start = start;
em->orig_start = start;
em->len = found_key.offset - start;
em->len);
if (existing) {
err = merge_extent_mapping(em_tree, existing,
- em, start,
- root->sectorsize);
+ em, start);
free_extent_map(existing);
if (err) {
free_extent_map(em);
if (!ret)
goto out_test;
- btrfs_init_work(&ordered->work, finish_ordered_fn, NULL, NULL);
+ btrfs_init_work(&ordered->work, btrfs_endio_write_helper,
+ finish_ordered_fn, NULL, NULL);
btrfs_queue_work(root->fs_info->endio_write_workers,
&ordered->work);
out_test:
map_length = orig_bio->bi_iter.bi_size;
ret = btrfs_map_block(root->fs_info, rw, start_sector << 9,
&map_length, NULL, 0);
- if (ret) {
- bio_put(orig_bio);
+ if (ret)
return -EIO;
- }
if (map_length >= orig_bio->bi_iter.bi_size) {
bio = orig_bio;
bio = btrfs_dio_bio_alloc(orig_bio->bi_bdev, start_sector, GFP_NOFS);
if (!bio)
return -ENOMEM;
+
bio->bi_private = dip;
bio->bi_end_io = btrfs_end_dio_bio;
atomic_inc(&dip->pending_bios);
count = iov_iter_count(iter);
if (test_bit(BTRFS_INODE_HAS_ASYNC_EXTENT,
&BTRFS_I(inode)->runtime_flags))
- filemap_fdatawrite_range(inode->i_mapping, offset, count);
+ filemap_fdatawrite_range(inode->i_mapping, offset,
+ offset + count - 1);
if (rw & WRITE) {
/*
work->inode = inode;
work->wait = wait;
work->delay_iput = delay_iput;
- btrfs_init_work(&work->work, btrfs_run_delalloc_work, NULL, NULL);
+ WARN_ON_ONCE(!inode);
+ btrfs_init_work(&work->work, btrfs_flush_delalloc_helper,
+ btrfs_run_delalloc_work, NULL, NULL);
return work;
}
if (ret)
goto out;
+ /*
+	 * We set the number of links to 0 in btrfs_new_inode(), and here we
+	 * set it to 1 because d_tmpfile() will issue a warning if the count
+	 * is 0, via:
+ *
+ * d_tmpfile() -> inode_dec_link_count() -> drop_nlink()
+ */
+ set_nlink(inode, 1);
d_tmpfile(dentry, inode);
mark_inode_dirty(inode);
if (ret)
goto fail;
- ret = btrfs_orphan_cleanup(pending_snapshot->snap);
- if (ret)
- goto fail;
-
- /*
- * If orphan cleanup did remove any orphans, it means the tree was
- * modified and therefore the commit root is not the same as the
- * current root anymore. This is a problem, because send uses the
- * commit root and therefore can see inode items that don't exist
- * in the current root anymore, and for example make calls to
- * btrfs_iget, which will do tree lookups based on the current root
- * and not on the commit root. Those lookups will fail, returning a
- * -ESTALE error, and making send fail with that error. So make sure
- * a send does not see any orphans we have just removed, and that it
- * will see the same inodes regardless of whether a transaction
- * commit happened before it started (meaning that the commit root
- * will be the same as the current root) or not.
- */
- if (readonly && pending_snapshot->snap->node !=
- pending_snapshot->snap->commit_root) {
- trans = btrfs_join_transaction(pending_snapshot->snap);
- if (IS_ERR(trans) && PTR_ERR(trans) != -ENOENT) {
- ret = PTR_ERR(trans);
- goto fail;
- }
- if (!IS_ERR(trans)) {
- ret = btrfs_commit_transaction(trans,
- pending_snapshot->snap);
- if (ret)
- goto fail;
- }
- }
-
inode = btrfs_lookup_dentry(dentry->d_parent->d_inode, dentry);
if (IS_ERR(inode)) {
ret = PTR_ERR(inode);
btrfs_mark_buffer_dirty(leaf);
btrfs_release_path(path);
- last_dest_end = new_key.offset + datal;
+ last_dest_end = ALIGN(new_key.offset + datal,
+ root->sectorsize);
ret = clone_finish_inode_update(trans, inode,
last_dest_end,
destoff, olen);
spin_unlock(&root->ordered_extent_lock);
btrfs_init_work(&ordered->flush_work,
+ btrfs_flush_delalloc_helper,
btrfs_run_ordered_extent_work, NULL, NULL);
list_add_tail(&ordered->work_list, &works);
btrfs_queue_work(root->fs_info->flush_workers,
elem.seq, &roots);
btrfs_put_tree_mod_seq(fs_info, &elem);
if (ret < 0)
- return ret;
+ goto out;
if (roots->nnodes != 1)
goto out;
memset(&fs_info->qgroup_rescan_work, 0,
sizeof(fs_info->qgroup_rescan_work));
btrfs_init_work(&fs_info->qgroup_rescan_work,
+ btrfs_qgroup_rescan_helper,
btrfs_qgroup_rescan_worker, NULL, NULL);
if (ret) {
static void async_rmw_stripe(struct btrfs_raid_bio *rbio)
{
- btrfs_init_work(&rbio->work, rmw_work, NULL, NULL);
+ btrfs_init_work(&rbio->work, btrfs_rmw_helper,
+ rmw_work, NULL, NULL);
btrfs_queue_work(rbio->fs_info->rmw_workers,
&rbio->work);
static void async_read_rebuild(struct btrfs_raid_bio *rbio)
{
- btrfs_init_work(&rbio->work, read_rebuild_work, NULL, NULL);
+ btrfs_init_work(&rbio->work, btrfs_rmw_helper,
+ read_rebuild_work, NULL, NULL);
btrfs_queue_work(rbio->fs_info->rmw_workers,
&rbio->work);
plug = container_of(cb, struct btrfs_plug_cb, cb);
if (from_schedule) {
- btrfs_init_work(&plug->work, unplug_work, NULL, NULL);
+ btrfs_init_work(&plug->work, btrfs_rmw_helper,
+ unplug_work, NULL, NULL);
btrfs_queue_work(plug->info->rmw_workers,
&plug->work);
return;
/* FIXME we cannot handle this properly right now */
BUG();
}
- btrfs_init_work(&rmw->work, reada_start_machine_worker, NULL, NULL);
+ btrfs_init_work(&rmw->work, btrfs_readahead_helper,
+ reada_start_machine_worker, NULL, NULL);
rmw->fs_info = fs_info;
btrfs_queue_work(fs_info->readahead_workers, &rmw->work);
sbio->index = i;
sbio->sctx = sctx;
sbio->page_count = 0;
- btrfs_init_work(&sbio->work, scrub_bio_end_io_worker,
- NULL, NULL);
+ btrfs_init_work(&sbio->work, btrfs_scrub_helper,
+ scrub_bio_end_io_worker, NULL, NULL);
if (i != SCRUB_BIOS_PER_SCTX - 1)
sctx->bios[i]->next_free = i + 1;
fixup_nodatasum->root = fs_info->extent_root;
fixup_nodatasum->mirror_num = failed_mirror_index + 1;
scrub_pending_trans_workers_inc(sctx);
- btrfs_init_work(&fixup_nodatasum->work, scrub_fixup_nodatasum,
- NULL, NULL);
+ btrfs_init_work(&fixup_nodatasum->work, btrfs_scrub_helper,
+ scrub_fixup_nodatasum, NULL, NULL);
btrfs_queue_work(fs_info->scrub_workers,
&fixup_nodatasum->work);
goto out;
sbio->err = err;
sbio->bio = bio;
- btrfs_init_work(&sbio->work, scrub_wr_bio_end_io_worker, NULL, NULL);
+ btrfs_init_work(&sbio->work, btrfs_scrubwrc_helper,
+ scrub_wr_bio_end_io_worker, NULL, NULL);
btrfs_queue_work(fs_info->scrub_wr_completion_workers, &sbio->work);
}
struct scrub_ctx *sctx;
int ret;
struct btrfs_device *dev;
+ struct rcu_string *name;
if (btrfs_fs_closing(fs_info))
return -EINVAL;
return -ENODEV;
}
+ if (!is_dev_replace && !readonly && !dev->writeable) {
+ mutex_unlock(&fs_info->fs_devices->device_list_mutex);
+ rcu_read_lock();
+ name = rcu_dereference(dev->name);
+ btrfs_err(fs_info, "scrub: device %s is not writable",
+ name->str);
+ rcu_read_unlock();
+ return -EROFS;
+ }
+
mutex_lock(&fs_info->scrub_lock);
if (!dev->in_fs_metadata || dev->is_tgtdev_for_dev_replace) {
mutex_unlock(&fs_info->scrub_lock);
nocow_ctx->len = len;
nocow_ctx->mirror_num = mirror_num;
nocow_ctx->physical_for_dev_replace = physical_for_dev_replace;
- btrfs_init_work(&nocow_ctx->work, copy_nocow_pages_worker, NULL, NULL);
+ btrfs_init_work(&nocow_ctx->work, btrfs_scrubnc_helper,
+ copy_nocow_pages_worker, NULL, NULL);
INIT_LIST_HEAD(&nocow_ctx->inodes);
btrfs_queue_work(fs_info->scrub_nocow_workers,
&nocow_ctx->work);
if (!fs_info->device_dir_kobj)
return -EINVAL;
- if (one_device) {
+ if (one_device && one_device->bdev) {
disk = one_device->bdev->bd_part;
disk_kobj = &part_to_dev(disk)->kobj;
struct list_head ordered_sums;
int skip_csum = BTRFS_I(inode)->flags & BTRFS_INODE_NODATASUM;
bool has_extents = false;
- bool need_find_last_extent = (*last_extent == 0);
+ bool need_find_last_extent = true;
bool done = false;
INIT_LIST_HEAD(&ordered_sums);
*/
if (ins_keys[i].type == BTRFS_EXTENT_DATA_KEY) {
has_extents = true;
- if (need_find_last_extent &&
- first_key.objectid == (u64)-1)
+ if (first_key.objectid == (u64)-1)
first_key = ins_keys[i];
} else {
need_find_last_extent = false;
if (!has_extents)
return ret;
+ if (need_find_last_extent && *last_extent == first_key.offset) {
+ /*
+		 * We don't have any leaves between our current one and the one
+ * we processed before that can have file extent items for our
+ * inode (and have a generation number smaller than our current
+ * transaction id).
+ */
+ need_find_last_extent = false;
+ }
+
/*
* Because we use btrfs_search_forward we could skip leaves that were
* not modified and then assume *last_extent is valid when it really
0, 0);
if (ret)
break;
- *last_extent = offset + len;
+ *last_extent = extent_end;
}
/*
* Need to let the callers know we dropped the path so they should
ret = 1;
device->fs_devices = fs_devices;
} else if (!device->name || strcmp(device->name->str, path)) {
+ /*
+		 * When the FS is already mounted:
+		 * 1. If you are here and device->name is NULL, that
+		 *    means this device was missing at the time of FS mount.
+		 * 2. If you are here and device->name differs
+		 *    from 'path', that means either
+		 *	a. the same device disappeared and reappeared with
+		 *	   a different name, or
+		 *	b. the missing-disk-which-was-replaced has
+		 *	   reappeared now.
+		 *
+		 * We must allow 1 and 2a above, but 2b would be spurious
+		 * and unintentional.
+		 *
+		 * Further, in cases 1 and 2a above, the disk at 'path'
+		 * would have missed some transactions while it was away,
+		 * and in case 2a the stale bdev has to be updated as well.
+		 * 2b must not be allowed at any time.
+ */
+
+ /*
+ * As of now don't allow update to btrfs_fs_device through
+ * the btrfs dev scan cli, after FS has been mounted.
+ */
+ if (fs_devices->opened) {
+ return -EBUSY;
+ } else {
+ /*
+			 * That is, if the FS is _not_ mounted and you
+			 * are here, there is more than one disk with
+			 * the same uuid and devid. We keep the one with
+			 * the larger generation number, or the last-in
+			 * if the generations are equal.
+ */
+ if (found_transid < device->generation)
+ return -EEXIST;
+ }
+
name = rcu_string_strdup(path, GFP_NOFS);
if (!name)
return -ENOMEM;
}
}
+ /*
+	 * Unmount does not free the btrfs_device struct, but it zeroes
+	 * the generation along with most of the other members. So just
+	 * restore it. We need it to pick the disk with the largest
+	 * generation (as above).
+ */
+ if (!fs_devices->opened)
+ device->generation = found_transid;
+
if (found_transid > fs_devices->latest_trans) {
fs_devices->latest_devid = devid;
fs_devices->latest_trans = found_transid;
btrfs_set_device_io_align(leaf, dev_item, device->io_align);
btrfs_set_device_io_width(leaf, dev_item, device->io_width);
btrfs_set_device_sector_size(leaf, dev_item, device->sector_size);
- btrfs_set_device_total_bytes(leaf, dev_item, device->total_bytes);
+ btrfs_set_device_total_bytes(leaf, dev_item, device->disk_total_bytes);
btrfs_set_device_bytes_used(leaf, dev_item, device->bytes_used);
btrfs_set_device_group(leaf, dev_item, 0);
btrfs_set_device_seek_speed(leaf, dev_item, 0);
device->fs_devices->total_devices--;
if (device->missing)
- root->fs_info->fs_devices->missing_devices--;
+ device->fs_devices->missing_devices--;
next_device = list_entry(root->fs_info->fs_devices->devices.next,
struct btrfs_device, dev_list);
if (srcdev->bdev) {
fs_info->fs_devices->open_devices--;
- /* zero out the old super */
- btrfs_scratch_superblock(srcdev);
+ /*
+		 * zero out the old super, unless the device is not
+		 * writable (e.g. a seed device)
+ */
+ if (srcdev->writeable)
+ btrfs_scratch_superblock(srcdev);
}
call_rcu(&srcdev->rcu, free_device);
fs_devices->seeding = 0;
fs_devices->num_devices = 0;
fs_devices->open_devices = 0;
+ fs_devices->missing_devices = 0;
+ fs_devices->num_can_discard = 0;
+ fs_devices->rotating = 0;
fs_devices->seed = seed_devices;
generate_random_uuid(fs_devices->fsid);
else
generate_random_uuid(dev->uuid);
- btrfs_init_work(&dev->work, pending_bios_fn, NULL, NULL);
+ btrfs_init_work(&dev->work, btrfs_submit_helper,
+ pending_bios_fn, NULL, NULL);
return dev;
}
if (atomic_read(&c->io_count) == 0)
break;
ret = nfs_wait_bit_killable(&q.key);
- } while (atomic_read(&c->io_count) != 0);
+ } while (atomic_read(&c->io_count) != 0 && !ret);
finish_wait(wq, &q.wait);
return ret;
}
/*
* nfs_page_group_lock - lock the head of the page group
* @req - request in group that is to be locked
+ * @nonblock - if true don't block waiting for lock
*
* this lock must be held if modifying the page group list
*
- * returns result from wait_on_bit_lock: 0 on success, < 0 on error
+ * return 0 on success, < 0 on error: -EAGAIN if nonblocking or the
+ * result from wait_on_bit_lock
+ *
+ * NOTE: calling with nonblock=false should always have set the
+ * lock bit (see fs/buffer.c and other uses of wait_on_bit_lock
+ * with TASK_UNINTERRUPTIBLE), so there is no need to check the result.
*/
int
-nfs_page_group_lock(struct nfs_page *req, bool wait)
+nfs_page_group_lock(struct nfs_page *req, bool nonblock)
{
struct nfs_page *head = req->wb_head;
- int ret;
WARN_ON_ONCE(head != head->wb_head);
- do {
- ret = wait_on_bit_lock(&head->wb_flags, PG_HEADLOCK,
- TASK_UNINTERRUPTIBLE);
- } while (wait && ret != 0);
+ if (!test_and_set_bit(PG_HEADLOCK, &head->wb_flags))
+ return 0;
- WARN_ON_ONCE(ret > 0);
- return ret;
+ if (!nonblock)
+ return wait_on_bit_lock(&head->wb_flags, PG_HEADLOCK,
+ TASK_UNINTERRUPTIBLE);
+
+ return -EAGAIN;
+}
+
+/*
+ * nfs_page_group_lock_wait - wait for the lock to clear, but don't grab it
+ * @req - a request in the group
+ *
+ * This is a blocking call to wait for the group lock to be cleared.
+ */
+void
+nfs_page_group_lock_wait(struct nfs_page *req)
+{
+ struct nfs_page *head = req->wb_head;
+
+ WARN_ON_ONCE(head != head->wb_head);
+
+ wait_on_bit(&head->wb_flags, PG_HEADLOCK,
+ TASK_UNINTERRUPTIBLE);
}
/*
{
bool ret;
- nfs_page_group_lock(req, true);
+ nfs_page_group_lock(req, false);
ret = nfs_page_group_sync_on_bit_locked(req, bit);
nfs_page_group_unlock(req);
struct nfs_pgio_header *hdr)
{
struct nfs_page *req;
- struct page **pages;
+ struct page **pages,
+ *last_page;
struct list_head *head = &desc->pg_list;
struct nfs_commit_info cinfo;
- unsigned int pagecount;
+ unsigned int pagecount, pageused;
pagecount = nfs_page_array_len(desc->pg_base, desc->pg_count);
if (!nfs_pgarray_set(&hdr->page_array, pagecount))
nfs_init_cinfo(&cinfo, desc->pg_inode, desc->pg_dreq);
pages = hdr->page_array.pagevec;
+ last_page = NULL;
+ pageused = 0;
while (!list_empty(head)) {
req = nfs_list_entry(head->next);
nfs_list_remove_request(req);
nfs_list_add_request(req, &hdr->pages);
- *pages++ = req->wb_page;
+
+ if (WARN_ON_ONCE(pageused >= pagecount))
+ return nfs_pgio_error(desc, hdr);
+
+ if (!last_page || last_page != req->wb_page) {
+ *pages++ = last_page = req->wb_page;
+ pageused++;
+ }
}
+ if (WARN_ON_ONCE(pageused != pagecount))
+ return nfs_pgio_error(desc, hdr);
if ((desc->pg_ioflags & FLUSH_COND_STABLE) &&
(desc->pg_moreio || nfs_reqs_to_commit(&cinfo)))
return false;
if (req_offset(req) != req_offset(prev) + prev->wb_bytes)
return false;
+ if (req->wb_page == prev->wb_page) {
+ if (req->wb_pgbase != prev->wb_pgbase + prev->wb_bytes)
+ return false;
+ } else {
+ if (req->wb_pgbase != 0 ||
+ prev->wb_pgbase + prev->wb_bytes != PAGE_CACHE_SIZE)
+ return false;
+ }
}
size = pgio->pg_ops->pg_test(pgio, prev, req);
WARN_ON_ONCE(size > req->wb_bytes);
struct nfs_page *subreq;
unsigned int bytes_left = 0;
unsigned int offset, pgbase;
- int ret;
- ret = nfs_page_group_lock(req, false);
- if (ret < 0) {
- desc->pg_error = ret;
- return 0;
- }
+ nfs_page_group_lock(req, false);
subreq = req;
bytes_left = subreq->wb_bytes;
if (desc->pg_recoalesce)
return 0;
/* retry add_request for this subreq */
- ret = nfs_page_group_lock(req, false);
- if (ret < 0) {
- desc->pg_error = ret;
- return 0;
- }
+ nfs_page_group_lock(req, false);
continue;
}
unsigned int pos = 0;
unsigned int len = nfs_page_length(req->wb_page);
- nfs_page_group_lock(req, true);
+ nfs_page_group_lock(req, false);
do {
tmp = nfs_page_group_search_locked(req->wb_head, pos);
return NULL;
}
- /* lock each request in the page group */
- ret = nfs_page_group_lock(head, false);
- if (ret < 0)
+ /* holding inode lock, so always make a non-blocking call to try the
+ * page group lock */
+ ret = nfs_page_group_lock(head, true);
+ if (ret < 0) {
+ spin_unlock(&inode->i_lock);
+
+ if (!nonblock && ret == -EAGAIN) {
+ nfs_page_group_lock_wait(head);
+ nfs_release_request(head);
+ goto try_again;
+ }
+
+ nfs_release_request(head);
return ERR_PTR(ret);
+ }
+
+ /* lock each request in the page group */
subreq = head;
do {
/*
{0x1002, 0x1315, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x1316, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x1317, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
+ {0x1002, 0x1318, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x131B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x131C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x131D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_KAVERI|RADEON_NEW_MEMMAP|RADEON_IS_IGP}, \
{0x1002, 0x6601, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6602, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6603, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
+ {0x1002, 0x6604, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
+ {0x1002, 0x6605, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6606, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6607, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
+ {0x1002, 0x6608, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6610, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6611, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6613, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6631, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_OLAND|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6640, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BONAIRE|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6641, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BONAIRE|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
+ {0x1002, 0x6646, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BONAIRE|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
+ {0x1002, 0x6647, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BONAIRE|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6649, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BONAIRE|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6650, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BONAIRE|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6651, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_BONAIRE|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6829, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_NEW_MEMMAP}, \
{0x1002, 0x682A, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x682B, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
+ {0x1002, 0x682C, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_NEW_MEMMAP}, \
{0x1002, 0x682D, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x682F, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
{0x1002, 0x6830, PCI_ANY_ID, PCI_ANY_ID, 0, 0, CHIP_VERDE|RADEON_IS_MOBILITY|RADEON_NEW_MEMMAP}, \
#define PHY_ID_BCM7366 0x600d8490
#define PHY_ID_BCM7439 0x600d8480
#define PHY_ID_BCM7445 0x600d8510
-#define PHY_ID_BCM7XXX_28 0x600d8400
#define PHY_BCM_OUI_MASK 0xfffffc00
#define PHY_BCM_OUI_1 0x00206000
FTRACE_OPS_FL_DELETED = 1 << 8,
};
+#ifdef CONFIG_DYNAMIC_FTRACE
+/* The hashes used to know which functions an ftrace_ops' callbacks trace */
+struct ftrace_ops_hash {
+ struct ftrace_hash *notrace_hash;
+ struct ftrace_hash *filter_hash;
+ struct mutex regex_lock;
+};
+#endif
+
/*
* Note, ftrace_ops can be referenced outside of RCU protection.
* (Although, for perf, the control ops prevent that). If ftrace_ops is
int __percpu *disabled;
#ifdef CONFIG_DYNAMIC_FTRACE
int nr_trampolines;
- struct ftrace_hash *notrace_hash;
- struct ftrace_hash *filter_hash;
+ struct ftrace_ops_hash local_hash;
+ struct ftrace_ops_hash *func_hash;
struct ftrace_hash *tramp_hash;
- struct mutex regex_lock;
unsigned long trampoline;
#endif
};
*/
struct gpio_desc;
-#ifdef CONFIG_GPIOLIB
-
#define GPIOD_FLAGS_BIT_DIR_SET BIT(0)
#define GPIOD_FLAGS_BIT_DIR_OUT BIT(1)
#define GPIOD_FLAGS_BIT_DIR_VAL BIT(2)
GPIOD_FLAGS_BIT_DIR_VAL,
};
+#ifdef CONFIG_GPIOLIB
+
/* Acquire and dispose GPIOs */
struct gpio_desc *__must_check __gpiod_get(struct device *dev,
const char *con_id,
#define HID_CONNECT_HIDDEV 0x08
#define HID_CONNECT_HIDDEV_FORCE 0x10
#define HID_CONNECT_FF 0x20
+#define HID_CONNECT_DRIVER 0x40
#define HID_CONNECT_DEFAULT (HID_CONNECT_HIDINPUT|HID_CONNECT_HIDRAW| \
HID_CONNECT_HIDDEV|HID_CONNECT_FF)
#define HID_QUIRK_HIDINPUT_FORCE 0x00000080
#define HID_QUIRK_NO_EMPTY_INPUT 0x00000100
#define HID_QUIRK_NO_INIT_INPUT_REPORTS 0x00000200
+#define HID_QUIRK_ALWAYS_POLL 0x00000400
#define HID_QUIRK_SKIP_OUTPUT_REPORTS 0x00010000
#define HID_QUIRK_SKIP_OUTPUT_REPORT_ID 0x00020000
#define HID_QUIRK_NO_OUTPUT_REPORTS_ON_INTR_EP 0x00040000
#define HID_CLAIMED_INPUT 1
#define HID_CLAIMED_HIDDEV 2
#define HID_CLAIMED_HIDRAW 4
+#define HID_CLAIMED_DRIVER 8
#define HID_STAT_ADDED 1
#define HID_STAT_PARSED 2
extern void nfs_unlock_request(struct nfs_page *req);
extern void nfs_unlock_and_release_request(struct nfs_page *);
extern int nfs_page_group_lock(struct nfs_page *, bool);
+extern void nfs_page_group_lock_wait(struct nfs_page *);
extern void nfs_page_group_unlock(struct nfs_page *);
extern bool nfs_page_group_sync_on_bit(struct nfs_page *, unsigned int);
__SYSCALL(__NR_seccomp, sys_seccomp)
#define __NR_getrandom 278
__SYSCALL(__NR_getrandom, sys_getrandom)
+#define __NR_memfd_create 279
+__SYSCALL(__NR_memfd_create, sys_memfd_create)
#undef __NR_syscalls
-#define __NR_syscalls 279
+#define __NR_syscalls 280
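For completeness, the new number can be exercised before a libc wrapper exists;
a minimal user-space sketch, assuming the asm-generic ABI shown here (the
hardcoded 279 is only a fallback for old headers; other architectures assign
different numbers):

#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef __NR_memfd_create
#define __NR_memfd_create 279	/* asm-generic number from this patch */
#endif

int main(void)
{
	/* flags = 0: no MFD_CLOEXEC, no MFD_ALLOW_SEALING */
	long fd = syscall(__NR_memfd_create, "demo", 0UL);

	if (fd < 0) {
		perror("memfd_create");
		return 1;
	}
	printf("memfd created: fd=%ld\n", fd);
	return 0;
}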
/*
* All syscalls below here should go away really,
};
/* drm_radeon_cs_reloc.flags */
+#define RADEON_RELOC_PRIO_MASK (0xf << 0)
struct drm_radeon_cs_reloc {
uint32_t handle;
#include <linux/hid.h>
enum uhid_event_type {
- UHID_CREATE,
+ __UHID_LEGACY_CREATE,
UHID_DESTROY,
UHID_START,
UHID_STOP,
UHID_OPEN,
UHID_CLOSE,
UHID_OUTPUT,
- UHID_OUTPUT_EV, /* obsolete! */
- UHID_INPUT,
- UHID_FEATURE,
- UHID_FEATURE_ANSWER,
+ __UHID_LEGACY_OUTPUT_EV,
+ __UHID_LEGACY_INPUT,
+ UHID_GET_REPORT,
+ UHID_GET_REPORT_REPLY,
UHID_CREATE2,
UHID_INPUT2,
+ UHID_SET_REPORT,
+ UHID_SET_REPORT_REPLY,
};
-struct uhid_create_req {
- __u8 name[128];
- __u8 phys[64];
- __u8 uniq[64];
- __u8 __user *rd_data;
- __u16 rd_size;
-
- __u16 bus;
- __u32 vendor;
- __u32 product;
- __u32 version;
- __u32 country;
-} __attribute__((__packed__));
-
struct uhid_create2_req {
__u8 name[128];
__u8 phys[64];
__u8 rd_data[HID_MAX_DESCRIPTOR_SIZE];
} __attribute__((__packed__));
+enum uhid_dev_flag {
+ UHID_DEV_NUMBERED_FEATURE_REPORTS = (1ULL << 0),
+ UHID_DEV_NUMBERED_OUTPUT_REPORTS = (1ULL << 1),
+ UHID_DEV_NUMBERED_INPUT_REPORTS = (1ULL << 2),
+};
+
+struct uhid_start_req {
+ __u64 dev_flags;
+};
+
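The dev_flags field is delivered with UHID_START and tells the transport driver
whether reports of a given type carry a report-ID prefix. A minimal sketch of
how user-space might consume it (linux/uhid.h assumed included):

static bool numbered_output;

static void handle_start(const struct uhid_event *ev)
{
	/* u.start is valid only when ev->type == UHID_START */
	numbered_output = !!(ev->u.start.dev_flags &
			     UHID_DEV_NUMBERED_OUTPUT_REPORTS);
}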
#define UHID_DATA_MAX 4096
enum uhid_report_type {
UHID_INPUT_REPORT,
};
-struct uhid_input_req {
+struct uhid_input2_req {
+ __u16 size;
+ __u8 data[UHID_DATA_MAX];
+} __attribute__((__packed__));
+
+struct uhid_output_req {
__u8 data[UHID_DATA_MAX];
__u16 size;
+ __u8 rtype;
} __attribute__((__packed__));
-struct uhid_input2_req {
+struct uhid_get_report_req {
+ __u32 id;
+ __u8 rnum;
+ __u8 rtype;
+} __attribute__((__packed__));
+
+struct uhid_get_report_reply_req {
+ __u32 id;
+ __u16 err;
__u16 size;
__u8 data[UHID_DATA_MAX];
} __attribute__((__packed__));
-struct uhid_output_req {
+struct uhid_set_report_req {
+ __u32 id;
+ __u8 rnum;
+ __u8 rtype;
+ __u16 size;
+ __u8 data[UHID_DATA_MAX];
+} __attribute__((__packed__));
+
+struct uhid_set_report_reply_req {
+ __u32 id;
+ __u16 err;
+} __attribute__((__packed__));
+
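The id field pairs a reply with its request: user-space must copy the id from
the UHID_GET_REPORT event into its UHID_GET_REPORT_REPLY. A minimal sketch,
assuming fd is an open /dev/uhid descriptor and fetch_report() is a
hypothetical helper that queries the real device (string.h, errno.h and
unistd.h assumed included):

static int handle_get_report(int fd, const struct uhid_event *req)
{
	struct uhid_event ev;
	ssize_t len;

	memset(&ev, 0, sizeof(ev));
	ev.type = UHID_GET_REPORT_REPLY;
	ev.u.get_report_reply.id = req->u.get_report.id;

	/* fetch_report() is hypothetical: ask the device for the report */
	len = fetch_report(req->u.get_report.rnum, req->u.get_report.rtype,
			   ev.u.get_report_reply.data, UHID_DATA_MAX);
	if (len < 0) {
		ev.u.get_report_reply.err = EIO;	/* nonzero = failure */
	} else {
		ev.u.get_report_reply.err = 0;
		ev.u.get_report_reply.size = len;
	}

	return write(fd, &ev, sizeof(ev)) < 0 ? -errno : 0;
}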
+/*
+ * Compat Layer
+ * All these commands and requests are obsolete. You should avoid using them in
+ * new code. We support them for backwards-compatibility, but you might not get
+ * access to new features if you use them.
+ */
+
+enum uhid_legacy_event_type {
+ UHID_CREATE = __UHID_LEGACY_CREATE,
+ UHID_OUTPUT_EV = __UHID_LEGACY_OUTPUT_EV,
+ UHID_INPUT = __UHID_LEGACY_INPUT,
+ UHID_FEATURE = UHID_GET_REPORT,
+ UHID_FEATURE_ANSWER = UHID_GET_REPORT_REPLY,
+};
+
+/* Obsolete! Use UHID_CREATE2. */
+struct uhid_create_req {
+ __u8 name[128];
+ __u8 phys[64];
+ __u8 uniq[64];
+ __u8 __user *rd_data;
+ __u16 rd_size;
+
+ __u16 bus;
+ __u32 vendor;
+ __u32 product;
+ __u32 version;
+ __u32 country;
+} __attribute__((__packed__));
+
+/* Obsolete! Use UHID_INPUT2. */
+struct uhid_input_req {
__u8 data[UHID_DATA_MAX];
__u16 size;
- __u8 rtype;
} __attribute__((__packed__));
-/* Obsolete! Newer kernels will no longer send these events but instead convert
- * it into raw output reports via UHID_OUTPUT. */
+/* Obsolete! Kernel uses UHID_OUTPUT exclusively now. */
struct uhid_output_ev_req {
__u16 type;
__u16 code;
__s32 value;
} __attribute__((__packed__));
+/* Obsolete! The kernel uses the ABI-compatible UHID_GET_REPORT now. */
struct uhid_feature_req {
__u32 id;
__u8 rnum;
__u8 rtype;
} __attribute__((__packed__));
+/* Obsolete! Use the ABI-compatible UHID_GET_REPORT_REPLY. */
struct uhid_feature_answer_req {
__u32 id;
__u16 err;
__u8 data[UHID_DATA_MAX];
} __attribute__((__packed__));
+/*
+ * UHID Events
+ * All UHID events from and to the kernel are encoded as "struct uhid_event".
+ * The "type" field contains a UHID_* type identifier. All payload depends on
+ * that type and can be accessed via ev->u.XYZ accordingly.
+ * If user-space writes short events, they're extended with 0s by the kernel. If
+ * the kernel writes short events, user-space shall extend them with 0s.
+ */
+
struct uhid_event {
__u32 type;
struct uhid_output_req output;
struct uhid_output_ev_req output_ev;
struct uhid_feature_req feature;
+ struct uhid_get_report_req get_report;
struct uhid_feature_answer_req feature_answer;
+ struct uhid_get_report_reply_req get_report_reply;
struct uhid_create2_req create2;
struct uhid_input2_req input2;
+ struct uhid_set_report_req set_report;
+ struct uhid_set_report_reply_req set_report_reply;
+ struct uhid_start_req start;
} u;
} __attribute__((__packed__));
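Tying it together, a device is registered with a single zero-padded write() of
such an event. A minimal sketch, assuming rdesc/rdesc_size hold a valid HID
report descriptor, field names as in the full uhid_create2_req definition, and
linux/uhid.h plus linux/input.h (for BUS_USB) included; the VID/PID are
hypothetical:

static int create_device(int fd, const __u8 *rdesc, __u16 rdesc_size)
{
	struct uhid_event ev;

	if (rdesc_size > HID_MAX_DESCRIPTOR_SIZE)
		return -EINVAL;

	memset(&ev, 0, sizeof(ev));
	ev.type = UHID_CREATE2;
	strcpy((char *)ev.u.create2.name, "uhid-sample-device");
	memcpy(ev.u.create2.rd_data, rdesc, rdesc_size);
	ev.u.create2.rd_size = rdesc_size;
	ev.u.create2.bus = BUS_USB;
	ev.u.create2.vendor = 0x1234;	/* hypothetical IDs */
	ev.u.create2.product = 0xabcd;

	return write(fd, &ev, sizeof(ev)) < 0 ? -errno : 0;
}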
#include <linux/cgroup.h>
#include <linux/module.h>
#include <linux/mman.h>
+#include <linux/compat.h>
#include "internal.h"
return 0;
}
+#ifdef CONFIG_COMPAT
+static long perf_compat_ioctl(struct file *file, unsigned int cmd,
+ unsigned long arg)
+{
+ switch (_IOC_NR(cmd)) {
+ case _IOC_NR(PERF_EVENT_IOC_SET_FILTER):
+ case _IOC_NR(PERF_EVENT_IOC_ID):
+ /* Fix up pointer size (usually 4 -> 8 in 32-on-64-bit case */
+ if (_IOC_SIZE(cmd) == sizeof(compat_uptr_t)) {
+ cmd &= ~IOCSIZE_MASK;
+ cmd |= sizeof(void *) << IOCSIZE_SHIFT;
+ }
+ break;
+ }
+ return perf_ioctl(file, cmd, arg);
+}
+#else
+# define perf_compat_ioctl NULL
+#endif
+
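The fix-up only matters for commands that encode a pointer size. Using the real
definition PERF_EVENT_IOC_SET_FILTER = _IOW('$', 6, char *), a small user-space
illustration of why the encoded sizes differ (compile once with -m32 and once
with -m64):

#include <stdio.h>
#include <sys/ioctl.h>

/* mirrors PERF_EVENT_IOC_SET_FILTER from linux/perf_event.h */
#define SET_FILTER _IOW('$', 6, char *)

int main(void)
{
	/* prints 4 on a 32-bit build, 8 on a 64-bit build */
	printf("_IOC_SIZE = %u\n", (unsigned)_IOC_SIZE(SET_FILTER));
	return 0;
}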
int perf_event_task_enable(void)
{
struct perf_event *event;
.read = perf_read,
.poll = perf_poll,
.unlocked_ioctl = perf_ioctl,
- .compat_ioctl = perf_ioctl,
+ .compat_ioctl = perf_compat_ioctl,
.mmap = perf_mmap,
.fasync = perf_fasync,
};
unsigned long hash, flags = 0;
struct kretprobe_instance *ri;
- /*TODO: consider to only swap the RA after the last pre_handler fired */
+ /*
+ * To avoid deadlocks, prohibit return probing in NMI contexts,
+ * just skip the probe and increase the (inexact) 'nmissed'
+ * statistical counter, so that the user is informed that
+ * something happened:
+ */
+ if (unlikely(in_nmi())) {
+ rp->nmissed++;
+ return 0;
+ }
+
+ /* TODO: consider swapping the RA only after the last pre_handler has fired */
hash = hash_ptr(current, KPROBE_HASH_BITS);
raw_spin_lock_irqsave(&rp->lock, flags);
if (!hlist_empty(&rp->free_instances)) {
#define FL_GLOBAL_CONTROL_MASK (FTRACE_OPS_FL_CONTROL)
#ifdef CONFIG_DYNAMIC_FTRACE
-#define INIT_REGEX_LOCK(opsname) \
- .regex_lock = __MUTEX_INITIALIZER(opsname.regex_lock),
+#define INIT_OPS_HASH(opsname) \
+ .func_hash = &opsname.local_hash, \
+ .local_hash.regex_lock = __MUTEX_INITIALIZER(opsname.local_hash.regex_lock),
+#define ASSIGN_OPS_HASH(opsname, val) \
+ .func_hash = val, \
+ .local_hash.regex_lock = __MUTEX_INITIALIZER(opsname.local_hash.regex_lock),
#else
-#define INIT_REGEX_LOCK(opsname)
+#define INIT_OPS_HASH(opsname)
+#define ASSIGN_OPS_HASH(opsname, val)
#endif
static struct ftrace_ops ftrace_list_end __read_mostly = {
.func = ftrace_stub,
.flags = FTRACE_OPS_FL_RECURSION_SAFE | FTRACE_OPS_FL_STUB,
+ INIT_OPS_HASH(ftrace_list_end)
};
/* ftrace_enabled is a method to turn ftrace on or off */
{
#ifdef CONFIG_DYNAMIC_FTRACE
if (!(ops->flags & FTRACE_OPS_FL_INITIALIZED)) {
- mutex_init(&ops->regex_lock);
+ mutex_init(&ops->local_hash.regex_lock);
+ ops->func_hash = &ops->local_hash;
ops->flags |= FTRACE_OPS_FL_INITIALIZED;
}
#endif
static struct ftrace_ops ftrace_profile_ops __read_mostly = {
.func = function_profile_call,
.flags = FTRACE_OPS_FL_RECURSION_SAFE | FTRACE_OPS_FL_INITIALIZED,
- INIT_REGEX_LOCK(ftrace_profile_ops)
+ INIT_OPS_HASH(ftrace_profile_ops)
};
static int register_ftrace_profiler(void)
#define EMPTY_HASH ((struct ftrace_hash *)&empty_hash)
static struct ftrace_ops global_ops = {
- .func = ftrace_stub,
- .notrace_hash = EMPTY_HASH,
- .filter_hash = EMPTY_HASH,
- .flags = FTRACE_OPS_FL_RECURSION_SAFE | FTRACE_OPS_FL_INITIALIZED,
- INIT_REGEX_LOCK(global_ops)
+ .func = ftrace_stub,
+ .local_hash.notrace_hash = EMPTY_HASH,
+ .local_hash.filter_hash = EMPTY_HASH,
+ INIT_OPS_HASH(global_ops)
+ .flags = FTRACE_OPS_FL_RECURSION_SAFE |
+ FTRACE_OPS_FL_INITIALIZED,
};
struct ftrace_page {
void ftrace_free_filter(struct ftrace_ops *ops)
{
ftrace_ops_init(ops);
- free_ftrace_hash(ops->filter_hash);
- free_ftrace_hash(ops->notrace_hash);
+ free_ftrace_hash(ops->func_hash->filter_hash);
+ free_ftrace_hash(ops->func_hash->notrace_hash);
}
static struct ftrace_hash *alloc_ftrace_hash(int size_bits)
}
static void
-ftrace_hash_rec_disable(struct ftrace_ops *ops, int filter_hash);
+ftrace_hash_rec_disable_modify(struct ftrace_ops *ops, int filter_hash);
static void
-ftrace_hash_rec_enable(struct ftrace_ops *ops, int filter_hash);
+ftrace_hash_rec_enable_modify(struct ftrace_ops *ops, int filter_hash);
static int
ftrace_hash_move(struct ftrace_ops *ops, int enable,
* Remove the current set, update the hash and add
* them back.
*/
- ftrace_hash_rec_disable(ops, enable);
+ ftrace_hash_rec_disable_modify(ops, enable);
old_hash = *dst;
rcu_assign_pointer(*dst, new_hash);
free_ftrace_hash_rcu(old_hash);
- ftrace_hash_rec_enable(ops, enable);
+ ftrace_hash_rec_enable_modify(ops, enable);
return 0;
}
return 0;
#endif
- filter_hash = rcu_dereference_raw_notrace(ops->filter_hash);
- notrace_hash = rcu_dereference_raw_notrace(ops->notrace_hash);
+ filter_hash = rcu_dereference_raw_notrace(ops->func_hash->filter_hash);
+ notrace_hash = rcu_dereference_raw_notrace(ops->func_hash->notrace_hash);
if ((ftrace_hash_empty(filter_hash) ||
ftrace_lookup_ip(filter_hash, ip)) &&
static void ftrace_remove_tramp(struct ftrace_ops *ops,
struct dyn_ftrace *rec)
{
- struct ftrace_func_entry *entry;
-
- entry = ftrace_lookup_ip(ops->tramp_hash, rec->ip);
- if (!entry)
+ /* If TRAMP is not set, no ops should have a trampoline for this record */
+ if (!(rec->flags & FTRACE_FL_TRAMP))
return;
+ rec->flags &= ~FTRACE_FL_TRAMP;
+
+ if ((!ftrace_hash_empty(ops->func_hash->filter_hash) &&
+ !ftrace_lookup_ip(ops->func_hash->filter_hash, rec->ip)) ||
+ ftrace_lookup_ip(ops->func_hash->notrace_hash, rec->ip))
+ return;
/*
* The tramp_hash entry will be removed at time
* of update.
*/
ops->nr_trampolines--;
- rec->flags &= ~FTRACE_FL_TRAMP;
}
-static void ftrace_clear_tramps(struct dyn_ftrace *rec)
+static void ftrace_clear_tramps(struct dyn_ftrace *rec, struct ftrace_ops *ops)
{
struct ftrace_ops *op;
+ /* If TRAMP is not set, no ops should have a trampoline for this record */
+ if (!(rec->flags & FTRACE_FL_TRAMP))
+ return;
+
do_for_each_ftrace_op(op, ftrace_ops_list) {
+ /*
+ * This function is called to clear other tramps,
+ * not the one that is being updated.
+ */
+ if (op == ops)
+ continue;
if (op->nr_trampolines)
ftrace_remove_tramp(op, rec);
} while_for_each_ftrace_op(op);
* gets inversed.
*/
if (filter_hash) {
- hash = ops->filter_hash;
- other_hash = ops->notrace_hash;
+ hash = ops->func_hash->filter_hash;
+ other_hash = ops->func_hash->notrace_hash;
if (ftrace_hash_empty(hash))
all = 1;
} else {
inc = !inc;
- hash = ops->notrace_hash;
- other_hash = ops->filter_hash;
+ hash = ops->func_hash->notrace_hash;
+ other_hash = ops->func_hash->filter_hash;
/*
* If the notrace hash has no items,
* then there's nothing to do.
/*
* If we are adding another function callback
* to this function, and the previous had a
- * trampoline used, then we need to go back to
- * the default trampoline.
+ * custom trampoline in use, then we need to go
+ * back to the default trampoline.
*/
- rec->flags &= ~FTRACE_FL_TRAMP;
-
- /* remove trampolines from any ops for this rec */
- ftrace_clear_tramps(rec);
+ ftrace_clear_tramps(rec, ops);
}
/*
__ftrace_hash_rec_update(ops, filter_hash, 1);
}
+static void ftrace_hash_rec_update_modify(struct ftrace_ops *ops,
+ int filter_hash, int inc)
+{
+ struct ftrace_ops *op;
+
+ __ftrace_hash_rec_update(ops, filter_hash, inc);
+
+ if (ops->func_hash != &global_ops.local_hash)
+ return;
+
+ /*
+ * If the ops shares the global_ops hash, then we need to update
+ * all ops that are enabled and use this hash.
+ */
+ do_for_each_ftrace_op(op, ftrace_ops_list) {
+ /* Already done */
+ if (op == ops)
+ continue;
+ if (op->func_hash == &global_ops.local_hash)
+ __ftrace_hash_rec_update(op, filter_hash, inc);
+ } while_for_each_ftrace_op(op);
+}
+
+static void ftrace_hash_rec_disable_modify(struct ftrace_ops *ops,
+ int filter_hash)
+{
+ ftrace_hash_rec_update_modify(ops, filter_hash, 0);
+}
+
+static void ftrace_hash_rec_enable_modify(struct ftrace_ops *ops,
+ int filter_hash)
+{
+ ftrace_hash_rec_update_modify(ops, filter_hash, 1);
+}
+
static void print_ip_ins(const char *fmt, unsigned char *p)
{
int i;
if (rec->flags & FTRACE_FL_TRAMP) {
ops = ftrace_find_tramp_ops_new(rec);
if (FTRACE_WARN_ON(!ops || !ops->trampoline)) {
- pr_warning("Bad trampoline accounting at: %p (%pS)\n",
- (void *)rec->ip, (void *)rec->ip);
+ pr_warn("Bad trampoline accounting at: %p (%pS) (%lx)\n",
+ (void *)rec->ip, (void *)rec->ip, rec->flags);
/* Ftrace is shutting down, return anything */
return (unsigned long)FTRACE_ADDR;
}
return ftrace_make_call(rec, ftrace_addr);
case FTRACE_UPDATE_MAKE_NOP:
- return ftrace_make_nop(NULL, rec, ftrace_addr);
+ return ftrace_make_nop(NULL, rec, ftrace_old_addr);
case FTRACE_UPDATE_MODIFY_CALL:
return ftrace_modify_call(rec, ftrace_old_addr, ftrace_addr);
} while_for_each_ftrace_rec();
/* The number of recs in the hash must match nr_trampolines */
- FTRACE_WARN_ON(ops->tramp_hash->count != ops->nr_trampolines);
+ if (FTRACE_WARN_ON(ops->tramp_hash->count != ops->nr_trampolines))
+ pr_warn("count=%ld trampolines=%d\n",
+ ops->tramp_hash->count,
+ ops->nr_trampolines);
return 0;
}
* Filter_hash being empty will default to trace module.
* But notrace hash requires a test of individual module functions.
*/
- return ftrace_hash_empty(ops->filter_hash) &&
- ftrace_hash_empty(ops->notrace_hash);
+ return ftrace_hash_empty(ops->func_hash->filter_hash) &&
+ ftrace_hash_empty(ops->func_hash->notrace_hash);
}
/*
return 0;
/* The function must be in the filter */
- if (!ftrace_hash_empty(ops->filter_hash) &&
- !ftrace_lookup_ip(ops->filter_hash, rec->ip))
+ if (!ftrace_hash_empty(ops->func_hash->filter_hash) &&
+ !ftrace_lookup_ip(ops->func_hash->filter_hash, rec->ip))
return 0;
/* If in notrace hash, we ignore it too */
- if (ftrace_lookup_ip(ops->notrace_hash, rec->ip))
+ if (ftrace_lookup_ip(ops->func_hash->notrace_hash, rec->ip))
return 0;
return 1;
} else {
rec = &iter->pg->records[iter->idx++];
if (((iter->flags & FTRACE_ITER_FILTER) &&
- !(ftrace_lookup_ip(ops->filter_hash, rec->ip))) ||
+ !(ftrace_lookup_ip(ops->func_hash->filter_hash, rec->ip))) ||
((iter->flags & FTRACE_ITER_NOTRACE) &&
- !ftrace_lookup_ip(ops->notrace_hash, rec->ip)) ||
+ !ftrace_lookup_ip(ops->func_hash->notrace_hash, rec->ip)) ||
((iter->flags & FTRACE_ITER_ENABLED) &&
!(rec->flags & FTRACE_FL_ENABLED))) {
* functions are enabled.
*/
if ((iter->flags & FTRACE_ITER_FILTER &&
- ftrace_hash_empty(ops->filter_hash)) ||
+ ftrace_hash_empty(ops->func_hash->filter_hash)) ||
(iter->flags & FTRACE_ITER_NOTRACE &&
- ftrace_hash_empty(ops->notrace_hash))) {
+ ftrace_hash_empty(ops->func_hash->notrace_hash))) {
if (*pos > 0)
return t_hash_start(m, pos);
iter->flags |= FTRACE_ITER_PRINTALL;
iter->ops = ops;
iter->flags = flag;
- mutex_lock(&ops->regex_lock);
+ mutex_lock(&ops->func_hash->regex_lock);
if (flag & FTRACE_ITER_NOTRACE)
- hash = ops->notrace_hash;
+ hash = ops->func_hash->notrace_hash;
else
- hash = ops->filter_hash;
+ hash = ops->func_hash->filter_hash;
if (file->f_mode & FMODE_WRITE) {
const int size_bits = FTRACE_HASH_DEFAULT_BITS;
file->private_data = iter;
out_unlock:
- mutex_unlock(&ops->regex_lock);
+ mutex_unlock(&ops->func_hash->regex_lock);
return ret;
}
{
.func = function_trace_probe_call,
.flags = FTRACE_OPS_FL_INITIALIZED,
- INIT_REGEX_LOCK(trace_probe_ops)
+ INIT_OPS_HASH(trace_probe_ops)
};
static int ftrace_probe_registered;
void *data)
{
struct ftrace_func_probe *entry;
- struct ftrace_hash **orig_hash = &trace_probe_ops.filter_hash;
+ struct ftrace_hash **orig_hash = &trace_probe_ops.func_hash->filter_hash;
struct ftrace_hash *hash;
struct ftrace_page *pg;
struct dyn_ftrace *rec;
if (WARN_ON(not))
return -EINVAL;
- mutex_lock(&trace_probe_ops.regex_lock);
+ mutex_lock(&trace_probe_ops.func_hash->regex_lock);
hash = alloc_and_copy_ftrace_hash(FTRACE_HASH_DEFAULT_BITS, *orig_hash);
if (!hash) {
out_unlock:
mutex_unlock(&ftrace_lock);
out:
- mutex_unlock(&trace_probe_ops.regex_lock);
+ mutex_unlock(&trace_probe_ops.func_hash->regex_lock);
free_ftrace_hash(hash);
return count;
struct ftrace_func_entry *rec_entry;
struct ftrace_func_probe *entry;
struct ftrace_func_probe *p;
- struct ftrace_hash **orig_hash = &trace_probe_ops.filter_hash;
+ struct ftrace_hash **orig_hash = &trace_probe_ops.func_hash->filter_hash;
struct list_head free_list;
struct ftrace_hash *hash;
struct hlist_node *tmp;
return;
}
- mutex_lock(&trace_probe_ops.regex_lock);
+ mutex_lock(&trace_probe_ops.func_hash->regex_lock);
hash = alloc_and_copy_ftrace_hash(FTRACE_HASH_DEFAULT_BITS, *orig_hash);
if (!hash)
mutex_unlock(&ftrace_lock);
out_unlock:
- mutex_unlock(&trace_probe_ops.regex_lock);
+ mutex_unlock(&trace_probe_ops.func_hash->regex_lock);
free_ftrace_hash(hash);
}
if (unlikely(ftrace_disabled))
return -ENODEV;
- mutex_lock(&ops->regex_lock);
+ mutex_lock(&ops->func_hash->regex_lock);
if (enable)
- orig_hash = &ops->filter_hash;
+ orig_hash = &ops->func_hash->filter_hash;
else
- orig_hash = &ops->notrace_hash;
+ orig_hash = &ops->func_hash->notrace_hash;
if (reset)
hash = alloc_ftrace_hash(FTRACE_HASH_DEFAULT_BITS);
mutex_unlock(&ftrace_lock);
out_regex_unlock:
- mutex_unlock(&ops->regex_lock);
+ mutex_unlock(&ops->func_hash->regex_lock);
free_ftrace_hash(hash);
return ret;
trace_parser_put(parser);
- mutex_lock(&iter->ops->regex_lock);
+ mutex_lock(&iter->ops->func_hash->regex_lock);
if (file->f_mode & FMODE_WRITE) {
filter_hash = !!(iter->flags & FTRACE_ITER_FILTER);
if (filter_hash)
- orig_hash = &iter->ops->filter_hash;
+ orig_hash = &iter->ops->func_hash->filter_hash;
else
- orig_hash = &iter->ops->notrace_hash;
+ orig_hash = &iter->ops->func_hash->notrace_hash;
mutex_lock(&ftrace_lock);
ret = ftrace_hash_move(iter->ops, filter_hash,
mutex_unlock(&ftrace_lock);
}
- mutex_unlock(&iter->ops->regex_lock);
+ mutex_unlock(&iter->ops->func_hash->regex_lock);
free_ftrace_hash(iter->hash);
kfree(iter);
static struct ftrace_ops global_ops = {
.func = ftrace_stub,
.flags = FTRACE_OPS_FL_RECURSION_SAFE | FTRACE_OPS_FL_INITIALIZED,
- INIT_REGEX_LOCK(global_ops)
};
static int __init ftrace_nodyn_init(void)
static struct ftrace_ops control_ops = {
.func = ftrace_ops_control_func,
.flags = FTRACE_OPS_FL_RECURSION_SAFE | FTRACE_OPS_FL_INITIALIZED,
- INIT_REGEX_LOCK(control_ops)
+ INIT_OPS_HASH(control_ops)
};
static inline void
#ifdef CONFIG_FUNCTION_GRAPH_TRACER
+static struct ftrace_ops graph_ops = {
+ .func = ftrace_stub,
+ .flags = FTRACE_OPS_FL_RECURSION_SAFE |
+ FTRACE_OPS_FL_INITIALIZED |
+ FTRACE_OPS_FL_STUB,
+#ifdef FTRACE_GRAPH_TRAMP_ADDR
+ .trampoline = FTRACE_GRAPH_TRAMP_ADDR,
+#endif
+ ASSIGN_OPS_HASH(graph_ops, &global_ops.local_hash)
+};
+
static int ftrace_graph_active;
int ftrace_graph_entry_stub(struct ftrace_graph_ent *trace)
*/
static void update_function_graph_func(void)
{
- if (ftrace_ops_list == &ftrace_list_end ||
- (ftrace_ops_list == &global_ops &&
- global_ops.next == &ftrace_list_end))
- ftrace_graph_entry = __ftrace_graph_entry;
- else
+ struct ftrace_ops *op;
+ bool do_test = false;
+
+ /*
+ * The graph and global ops share the same set of functions
+ * to test. If any other ops is on the list, then
+ * the graph tracing needs to test if it's the function
+ * it should call.
+ */
+ do_for_each_ftrace_op(op, ftrace_ops_list) {
+ if (op != &global_ops && op != &graph_ops &&
+ op != &ftrace_list_end) {
+ do_test = true;
+ /* in double loop, break out with goto */
+ goto out;
+ }
+ } while_for_each_ftrace_op(op);
+ out:
+ if (do_test)
ftrace_graph_entry = ftrace_graph_entry_test;
+ else
+ ftrace_graph_entry = __ftrace_graph_entry;
}
static struct notifier_block ftrace_suspend_notifier = {
ftrace_graph_entry = ftrace_graph_entry_test;
update_function_graph_func();
- /* Function graph doesn't use the .func field of global_ops */
- global_ops.flags |= FTRACE_OPS_FL_STUB;
-
-#ifdef CONFIG_DYNAMIC_FTRACE
- /* Optimize function graph calling (if implemented by arch) */
- if (FTRACE_GRAPH_TRAMP_ADDR != 0)
- global_ops.trampoline = FTRACE_GRAPH_TRAMP_ADDR;
-#endif
-
- ret = ftrace_startup(&global_ops, FTRACE_START_FUNC_RET);
+ ret = ftrace_startup(&graph_ops, FTRACE_START_FUNC_RET);
out:
mutex_unlock(&ftrace_lock);
ftrace_graph_return = (trace_func_graph_ret_t)ftrace_stub;
ftrace_graph_entry = ftrace_graph_entry_stub;
__ftrace_graph_entry = ftrace_graph_entry_stub;
- ftrace_shutdown(&global_ops, FTRACE_STOP_FUNC_RET);
- global_ops.flags &= ~FTRACE_OPS_FL_STUB;
-#ifdef CONFIG_DYNAMIC_FTRACE
- if (FTRACE_GRAPH_TRAMP_ADDR != 0)
- global_ops.trampoline = 0;
-#endif
+ ftrace_shutdown(&graph_ops, FTRACE_STOP_FUNC_RET);
unregister_pm_notifier(&ftrace_suspend_notifier);
unregister_trace_sched_switch(ftrace_graph_probe_sched_switch, NULL);
work = &cpu_buffer->irq_work;
}
- work->waiters_pending = true;
poll_wait(filp, &work->waiters, poll_table);
+ work->waiters_pending = true;
+ /*
+ * There's a tight race between setting the waiters_pending and
+ * checking if the ring buffer is empty. Once the waiters_pending bit
+ * is set, the next event will wake the task up, but we can get stuck
+ * if there's only a single event already in the buffer.
+ *
+ * FIXME: Ideally, we need a memory barrier on the writer side as well,
+ * but adding a memory barrier to all events will cause too much of a
+ * performance hit in the fast path. We only need a memory barrier when
+ * the buffer goes from empty to having content. But as this race is
+ * extremely small, and it's not a problem if another event comes in, we
+ * will fix it later.
+ */
+ smp_mb();
if ((cpu == RING_BUFFER_ALL_CPUS && !ring_buffer_empty(buffer)) ||
(cpu != RING_BUFFER_ALL_CPUS && !ring_buffer_empty_cpu(buffer, cpu)))
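To see why the store must precede the emptiness check, consider the
interleaving the smp_mb() rules out; a schematic of the reordered execution,
not real code:

/*
 *  poll (reader)                      event commit (writer)
 *  -------------                      ---------------------
 *                                     store event into buffer
 *                                     load waiters_pending == false
 *                                       -> no wakeup issued
 *  load "buffer empty" (stale)
 *  store waiters_pending = true
 *  sleep, missing the only event
 *
 * With waiters_pending set first and smp_mb() before the emptiness
 * check, at least one side must observe the other's store.
 */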
priv->lane2_ops = NULL;
if (priv->lane_version > 1)
priv->lane2_ops = &lane2_ops;
+ rtnl_lock();
if (dev_set_mtu(dev, mesg->content.config.mtu))
pr_info("%s: change_mtu to %d failed\n",
dev->name, mesg->content.config.mtu);
+ rtnl_unlock();
priv->is_proxy = mesg->content.config.is_proxy;
break;
case l_flush_tran_id:
/* Reached the end of the list, so insert after 'frag_entry_last'. */
if (likely(frag_entry_last)) {
- hlist_add_behind(&frag_entry_last->list, &frag_entry_new->list);
+ hlist_add_behind(&frag_entry_new->list, &frag_entry_last->list);
chain->size += skb->len - hdr_size;
chain->timestamp = jiffies;
ret = true;
if (dst->flags & DST_HOST) {
mp = dst_metrics_write_ptr(dst);
} else {
- mp = kzalloc(sizeof(u32) * RTAX_MAX, GFP_KERNEL);
+ mp = kzalloc(sizeof(u32) * RTAX_MAX, GFP_ATOMIC);
if (!mp)
return -ENOMEM;
dst_init_metrics(dst, mp, 0);
list_del(&sdata->reserved_chanctx_list);
list_move(&sdata->assigned_chanctx_list,
- &new_ctx->assigned_vifs);
+ &ctx->assigned_vifs);
sdata->reserved_chanctx = NULL;
ieee80211_vif_chanctx_reservation_complete(sdata);
static int make_writable(struct sk_buff *skb, int write_len)
{
+ if (!pskb_may_pull(skb, write_len))
+ return -ENOMEM;
+
if (!skb_cloned(skb) || skb_clone_writable(skb, write_len))
return 0;
vlan_set_encap_proto(skb, vhdr);
skb->mac_header += VLAN_HLEN;
+ if (skb_network_offset(skb) < ETH_HLEN)
+ skb_set_network_header(skb, ETH_HLEN);
skb_reset_mac_len(skb);
return 0;
p1->tov_in_jiffies = msecs_to_jiffies(p1->retire_blk_tov);
p1->blk_sizeof_priv = req_u->req3.tp_sizeof_priv;
+ p1->max_frame_len = p1->kblk_size - BLK_PLUS_PRIV(p1->blk_sizeof_priv);
prb_init_ft_ops(p1, req_u);
prb_setup_retire_blk_timer(po, tx_ring);
prb_open_block(p1, pbd);
if ((int)snaplen < 0)
snaplen = 0;
}
+ } else if (unlikely(macoff + snaplen >
+ GET_PBDQC_FROM_RB(&po->rx_ring)->max_frame_len)) {
+ u32 nval;
+
+ nval = GET_PBDQC_FROM_RB(&po->rx_ring)->max_frame_len - macoff;
+ pr_err_once("tpacket_rcv: packet too big, clamped from %u to %u. macoff=%u\n",
+ snaplen, nval, macoff);
+ snaplen = nval;
+ if (unlikely((int)snaplen < 0)) {
+ snaplen = 0;
+ macoff = GET_PBDQC_FROM_RB(&po->rx_ring)->max_frame_len;
+ }
}
spin_lock(&sk->sk_receive_queue.lock);
h.raw = packet_current_rx_frame(po, skb,
goto out;
if (unlikely(req->tp_block_size & (PAGE_SIZE - 1)))
goto out;
+ if (po->tp_version >= TPACKET_V3 &&
+ (int)(req->tp_block_size -
+ BLK_PLUS_PRIV(req_u->req3.tp_sizeof_priv)) <= 0)
+ goto out;
if (unlikely(req->tp_frame_size < po->tp_hdrlen +
po->tp_reserve))
goto out;
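The new check is observable from user-space: a TPACKET_V3 ring whose
block-header-plus-private area leaves no room for frames is now rejected up
front. A hedged sketch (fd assumed to be an AF_PACKET socket, values
hypothetical):

struct tpacket_req3 req = {
	.tp_block_size  = 4096,
	.tp_block_nr    = 4,
	.tp_frame_size  = 2048,
	.tp_frame_nr    = 8,
	.tp_sizeof_priv = 4096,	/* larger than any usable remainder */
};
int ver = TPACKET_V3;

setsockopt(fd, SOL_PACKET, PACKET_VERSION, &ver, sizeof(ver));
/* fails with EINVAL because BLK_PLUS_PRIV(4096) exceeds tp_block_size */
if (setsockopt(fd, SOL_PACKET, PACKET_RX_RING, &req, sizeof(req)) < 0)
	perror("PACKET_RX_RING");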
char *pkblk_start;
char *pkblk_end;
int kblk_size;
+ unsigned int max_frame_len;
unsigned int knum_blocks;
uint64_t knxt_seq_num;
char *prev;
struct cbq_class *tx_borrowed;
int tx_len;
psched_time_t now; /* Cached timestamp */
- psched_time_t now_rt; /* Cached real time */
unsigned int pmask;
struct hrtimer delay_timer;
int toplevel = q->toplevel;
if (toplevel > cl->level && !(qdisc_is_throttled(cl->q))) {
- psched_time_t now;
- psched_tdiff_t incr;
-
- now = psched_get_time();
- incr = now - q->now_rt;
- now = q->now + incr;
+ psched_time_t now = psched_get_time();
do {
if (cl->undertime < now) {
struct cbq_class *this = q->tx_class;
struct cbq_class *cl = this;
int len = q->tx_len;
+ psched_time_t now;
q->tx_class = NULL;
+ /* Time integrator. We calculate EOS time
+ * by adding expected packet transmission time.
+ */
+ now = q->now + L2T(&q->link, len);
for ( ; cl; cl = cl->share) {
long avgidle = cl->avgidle;
* idle = (now - last) - last_pktlen/rate
*/
- idle = q->now - cl->last;
+ idle = now - cl->last;
if ((unsigned long)idle > 128*1024*1024) {
avgidle = cl->maxidle;
} else {
idle -= L2T(&q->link, len);
idle += L2T(cl, len);
- cl->undertime = q->now + idle;
+ cl->undertime = now + idle;
} else {
/* Underlimit */
else
cl->avgidle = avgidle;
}
- cl->last = q->now;
+ if ((s64)(now - cl->last) > 0)
+ cl->last = now;
}
cbq_update_toplevel(q, this, q->tx_borrowed);
struct sk_buff *skb;
struct cbq_sched_data *q = qdisc_priv(sch);
psched_time_t now;
- psched_tdiff_t incr;
now = psched_get_time();
- incr = now - q->now_rt;
-
- if (q->tx_class) {
- psched_tdiff_t incr2;
- /* Time integrator. We calculate EOS time
- * by adding expected packet transmission time.
- * If real time is greater, we warp artificial clock,
- * so that:
- *
- * cbq_time = max(real_time, work);
- */
- incr2 = L2T(&q->link, q->tx_len);
- q->now += incr2;
+
+ if (q->tx_class)
cbq_update(q);
- if ((incr -= incr2) < 0)
- incr = 0;
- q->now += incr;
- } else {
- if (now > q->now)
- q->now = now;
- }
- q->now_rt = now;
+
+ q->now = now;
for (;;) {
q->wd_expires = 0;
hrtimer_cancel(&q->delay_timer);
q->toplevel = TC_CBQ_MAXLEVEL;
q->now = psched_get_time();
- q->now_rt = q->now;
for (prio = 0; prio <= TC_CBQ_MAXPRIO; prio++)
q->active[prio] = NULL;
q->delay_timer.function = cbq_undelay;
q->toplevel = TC_CBQ_MAXLEVEL;
q->now = psched_get_time();
- q->now_rt = q->now;
cbq_link_class(&q->link);
else {
dst_release(transport->dst);
transport->dst = NULL;
+ ulp_notify = false;
}
spc_state = SCTP_ADDR_UNREACHABLE;
{
u8 score_curr, score_best;
- if (best == NULL)
+ if (best == NULL || curr == best)
return curr;
score_curr = sctp_trans_score(curr);
trans_sec = trans_pri;
/* If we failed to find a usable transport, just camp on the
- * primary or retran, even if they are inactive, if possible
- * pick a PF iff it's the better choice.
+ * active path, or pick a PF transport iff it's the better choice.
*/
if (trans_pri == NULL) {
- trans_pri = sctp_trans_elect_best(asoc->peer.primary_path,
- asoc->peer.retran_path);
- trans_pri = sctp_trans_elect_best(trans_pri, trans_pf);
- trans_sec = asoc->peer.primary_path;
+ trans_pri = sctp_trans_elect_best(asoc->peer.active_path, trans_pf);
+ trans_sec = trans_pri;
}
/* Set the active and retran transports. */
return msg_importance(&port->phdr);
}
-static inline void tipc_port_set_importance(struct tipc_port *port, int imp)
+static inline int tipc_port_set_importance(struct tipc_port *port, int imp)
{
+ if (imp > TIPC_CRITICAL_IMPORTANCE)
+ return -EINVAL;
msg_set_importance(&port->phdr, (u32)imp);
+ return 0;
}
#endif
switch (opt) {
case TIPC_IMPORTANCE:
- tipc_port_set_importance(port, value);
+ res = tipc_port_set_importance(port, value);
break;
case TIPC_SRC_DROPPABLE:
if (sock->type != SOCK_STREAM)
$prototype =~ s/^noinline +//;
$prototype =~ s/__init +//;
$prototype =~ s/__init_or_module +//;
+ $prototype =~ s/__meminit +//;
$prototype =~ s/__must_check +//;
$prototype =~ s/__weak +//;
my $define = $prototype =~ s/^#\s*define\s+//; #ak added