Since we have to be able to handle MD updates, having an in-tree
set of data structures representing the MD objects actually makes
things more painful.
The MD itself is easy to parse, and we can implement the existing
interfaces using direct parsing of the MD binary image.
The MD is now reference counted, so accesses have to now take the
form:
handle = mdesc_grab();
... operations on MD ...
mdesc_release(handle);
The only remaining issue is cases where code holds on to references
to MD property values. mdesc_get_property() returns a direct pointer
to the property value; most callers just pull in the information they
need and discard the pointer, but a few use the pointer directly over
a long lifetime. Those will be fixed up in a subsequent changeset.
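As an illustration of the intended pattern (signatures abbreviated;
the node, property name, and mac_addr destination below are only
examples), consumers copy what they need out while the reference is
held:

    struct mdesc_handle *hp;
    const void *prop;
    int len;

    hp = mdesc_grab();
    if (hp) {
        /* Copy the property value out while the MD reference
         * is held instead of caching the returned pointer.
         */
        prop = mdesc_get_property(hp, node, "local-mac-address", &len);
        if (prop)
            memcpy(mac_addr, prop, len);
        mdesc_release(hp);
    }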
A preliminary handler for MD update events from domain services is
included; it is rudimentary, but it works and handles all of the
reference counting. It does not yet check the generation number of
the MDs, and it does not generate an "add/delete" list to notify
interested parties about MD changes, but that will be forthcoming.
Signed-off-by: David S. Miller <davem@davemloft.net>
All of the interrupts currently say "LDC RX" and "LDC TX", which is
next to useless. Put a device-specific prefix before "RX" and "TX"
instead, which makes the names much more useful.
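Roughly (the per-channel state, handler, and variable names here are
illustrative, not the exact driver code):

    /* Encode which device the channel belongs to in the IRQ name,
     * so /proc/interrupts shows e.g. "vnet0 RX" instead of a fixed
     * string shared by every channel. The name buffers must live as
     * long as the IRQ is registered, so keep them in per-channel state.
     */
    snprintf(lp->rx_irq_name, sizeof(lp->rx_irq_name), "%s RX", name);
    snprintf(lp->tx_irq_name, sizeof(lp->tx_irq_name), "%s TX", name);

    err = request_irq(rx_irq, ldc_rx, 0, lp->rx_irq_name, lp);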
Signed-off-by: David S. Miller <davem@davemloft.net>
Besides the existing usage for power-button interrupts, we'll
want to make use of this code for domain services, where the
LDOM manager can send reboot requests to the guest node.
Signed-off-by: David S. Miller <davem@davemloft.net>
1) LDC_MODE_RELIABLE is deprecated and unused by anything; it and
LDC_MODE_STREAM were also mis-numbered.
2) read_stream() should try to read as much as possible into
the per-LDC stream buffer area, so do not trim the read_nonraw()
length by the caller's size parameter.
3) Send data ACKs when necessary in read_nonraw().
4) In read_nonraw() when we get a pure ACK, advance the RX head
unconditionally past it.
5) Provide the ACKID field in the ldcdbg() packet dump in read_nonraw().
This helps debugging stream mode LDC channel problems.
6) Decrease verbosity of rx_data_wait() so that it is more useful.
A debugging message on each loop iteration is too much.
7) In process_data_ack() stop the loop checking when we hit lp->tx_tail
not lp->tx_head.
8) Set the seqid field properly in send_data_nack().
Signed-off-by: David S. Miller <davem@davemloft.net>
This is also a partial workaround for a bug in the LDOM firmware which
double-transmits RX inos during high load. Without this, such an
event causes the kernel to loop forever in the interrupt call chain
ACK'ing but never actually running the IRQ handler (and thus clearing
the interrupt condition in the device).
There is still a potentially bad effect when double INOs occur that
is not covered by this changeset. Namely, if the INO is already on
the per-cpu INO vector list, we still blindly re-insert it, and thus
we can end up losing interrupts already linked in after it.
We could deal with that by traversing the list before insertion,
but that's too expensive for this edge case.
Signed-off-by: David S. Miller <davem@davemloft.net>
Virtual devices on Sun Logical Domains are built on top
of a virtual channel framework. This, with the help of hypervisor
interfaces, provides a link-layer protocol with basic
handshaking over which virtual device clients and servers
communicate.
Built on top of this is a VIO device protocol which has its
own handshaking and message types. At this layer, attributes
are exchanged (disk size, network device addresses, etc.),
descriptor rings are registered, and data transfers are
triggered and replied to.
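For a flavor of the message format (a sketch; see the vio.h header
for the authoritative layout), every VIO message begins with a common
tag identifying the message class and session:

    struct vio_msg_tag {
        u8      type;       /* VIO_TYPE_CTRL, VIO_TYPE_DATA, VIO_TYPE_ERR */
        u8      stype;      /* VIO_SUBTYPE_INFO, _ACK, _NACK              */
        u16     stype_env;  /* e.g. attribute info, dring registration    */
        u32     sid;        /* session id established at handshake time   */
    };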
Signed-off-by: David S. Miller <davem@davemloft.net>
Currently there are 97 occurrences where drivers need the PCI
revision ID. We can read it once at enumeration time for all
devices instead. Even the PCI subsystem needs the revision several
times for quirks. The extra u8 member pads out nicely in the pci_dev
struct.
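With the cached member, the typical driver-side pattern becomes,
roughly:

    u8 rev;

    /* Before: every user read config space itself. */
    pci_read_config_byte(pdev, PCI_REVISION_ID, &rev);

    /* After: use the value read once at enumeration time. */
    rev = pdev->revision;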
Signed-off-by: Auke Kok <auke-jan.h.kok@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
the SMP load-balancer uses the boot-time migration-cost estimation
code to attempt to improve the quality of balancing. The reason for
this code is that the discrete priority queues do not preserve
the order of scheduling accurately, so the load-balancer skips
tasks that were running on a CPU 'recently'.
this code is fundamentally fragile: the boot-time migration cost
detector doesn't really work on systems that have large L3 caches,
it caused boot delays on large systems, and the whole cache-hot
concept made the balancing code pretty nondeterministic as well.
(and hey, i wrote most of it, so i can say it out loud that it sucks ;-)
under CFS the same cache affinity can be achieved without any
cache-hot special-case: tasks are sorted in the 'timeline'
tree and the SMP balancer picks tasks from the left side of the
tree, thus the most cache-cold task is balanced automatically.
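conceptually, the pick is just the leftmost node of the time-ordered
rbtree (field names are illustrative, taken from the CFS code of this
era):

    struct rb_node *left = rb_first(&cfs_rq->tasks_timeline);
    struct sched_entity *se = NULL;

    /* the leftmost entity has waited longest and is therefore the
     * most cache-cold candidate; no cache-hot estimation needed.
     */
    if (left)
        se = rb_entry(left, struct sched_entity, run_node);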
Signed-off-by: Ingo Molnar <mingo@elte.hu>
We were using the wrong call to turn them on, and when enabling
we also need to forcibly set the state to IDLE.
Signed-off-by: David S. Miller <davem@davemloft.net>
In pci_determine_mem_io_space(), do not hard code the region sizes.
Instead, use the values given to us in the ranges property.
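For illustration (assuming the usual OBP PCI 'ranges' layout of 3
child-address cells, 2 parent-address cells, and 2 size cells per
entry; 'ranges' points at the property data and 'i' indexes an
entry), the region size comes straight out of the property:

    /* One "ranges" entry: child_hi, child_mid, child_lo,
     *                     parent_hi, parent_lo, size_hi, size_lo
     */
    const u32 *ent = ranges + (i * 7);
    u64 size = ((u64)ent[5] << 32) | ent[6];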
Thanks go to Mikael Petterson for the original Xorg failure
bug report, and to Mikael and Dmitry Artamonow for the strace dumps.
Signed-off-by: David S. Miller <davem@davemloft.net>
This fixes the IDE controller not showing up on Netra-T1
systems.
Just like Simba bridges, some PCI bridges can lack the
'ranges' OBP property. So we handle this similarly to
the existing Simba code:
1) In of_device register address resolving, we push the
translation to the parent.
2) In PCI device scanning, we interrogate the PCI config
space registers of the PCI bus device in order to resolve
the resources, just like the generic Linux PCI probing
code does, roughly as sketched below.
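A rough sketch of the fallback in (2), mirroring what
pci_read_bridge_bases() does in the generic code ('dev' here is the
bridge's pci_dev):

    u8 io_base_lo, io_limit_lo;
    u16 mem_base, mem_limit;

    /* With no 'ranges' property, derive the bridge windows from
     * the standard PCI-to-PCI bridge forwarding registers.
     */
    pci_read_config_byte(dev, PCI_IO_BASE, &io_base_lo);
    pci_read_config_byte(dev, PCI_IO_LIMIT, &io_limit_lo);
    pci_read_config_word(dev, PCI_MEMORY_BASE, &mem_base);
    pci_read_config_word(dev, PCI_MEMORY_LIMIT, &mem_limit);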
With much help and testing from Fabio, who also reported
the initial problem.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Fabio Massimo Di Nitto <fabbione@ubuntu.com>
To be consistent with other architectures, include the generic version
of rwsem.h.
Signed-off-by: Robert P. J. Day <rpjday@mindspring.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
We used to access the 64-bit IRQ IMAP and ICLR registers of bus
controllers 4 bytes in, as a 32-bit register word, since only the
low 32 bits were relevant. This seemed like a good idea at the time.
But the PCI-E controller requires full 8-byte 64-bit access to
these registers, so we switched over to accessing them fully.
SBUS was not adjusted properly, which broke interrupts completely.
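The difference, roughly (accessor and constant names are illustrative
of the sparc64 register helpers, not a quote of the actual patch):

    unsigned long tmp;

    /* Old: 32-bit access to the low word of the register.
     *     tmp = upa_readl(imap + 4);
     * New: full 64-bit access, which the PCI-E controller requires
     * and which SBUS now needs to follow as well.
     */
    tmp = upa_readq(imap);
    upa_writeq(tmp | IMAP_VALID, imap);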
Signed-off-by: David S. Miller <davem@davemloft.net>
If we are on Hummingbird, the bus runs at 66MHz.
pbm->pci_bus should be set up with the result of pci_scan_one_pbm(),
or else we dereference NULL pointers in the error interrupt handlers.
Signed-off-by: David S. Miller <davem@davemloft.net>
It's not just sun4v hypervisor platforms that should return true
for this; sun4u with UltraSPARC-IV should return true too.
Signed-off-by: David S. Miller <davem@davemloft.net>
The scheduling domain hierarchy is:
all cpus -->
cpus that share an instruction cache -->
cpus that share an integer execution unit
Signed-off-by: David S. Miller <davem@davemloft.net>
If the system supports hypervisor-based statistics, allow them to
be fetched, enabled, and disabled via sysfs.
Enable and disable via the boolean:
/sys/devices/system/cpu/cpuN/mmustat_enable
Statistic values are provided under:
/sys/devices/system/cpu/cpuN/mmu_status/
Signed-off-by: David S. Miller <davem@davemloft.net>
Also, use per-cpu data for struct cpu. Calling kmalloc for
each cpu in topology_init() is just plain clumsy.
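Something along these lines (a sketch, not the exact code):

    static DEFINE_PER_CPU(struct cpu, cpu_devices);

    static int __init topology_init(void)
    {
        int cpu;

        /* Statically allocated per-cpu storage; no kmalloc needed. */
        for_each_possible_cpu(cpu)
            register_cpu(&per_cpu(cpu_devices, cpu), cpu);

        return 0;
    }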
Signed-off-by: David S. Miller <davem@davemloft.net>
The RODATA section definition was hardcoded to a specific
alignment in include/asm-generic/vmlinux.lds.h, but for sparc64
this did not match the PAGE_SIZE.
Introduce a new section definition named RO_DATA that takes the
actual alignment as a parameter. RODATA is provided for backward
compatibility.
On top of this, avoid hardcoding the alignment for sparc64 in the
rest of the script.
The fix is build-tested on sparc64 + x86_64.
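Abbreviated sketch of the idea (the real macro also emits the
start/end markers and related read-only sections):

    #define RO_DATA(align)                                      \
        . = ALIGN((align));                                     \
        .rodata : AT(ADDR(.rodata) - LOAD_OFFSET) {             \
            *(.rodata) *(.rodata.*)                             \
        }

    /* Old name kept for platforms that do not care about a
     * specific alignment.
     */
    #define RODATA RO_DATA(4096)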
Signed-off-by: Sam Ravnborg <sam@ravnborg.org>
Several interfaces were missing and others misnumbered or
improperly documented.
Also, make sure to check the return value when registering
the kernel TSBs with the hypervisor. This helped to find
the 4MB kernel TSB alignment bug fixed in a previous changeset.
Signed-off-by: David S. Miller <davem@davemloft.net>
1) The TSB lookup was not using the correct hash mask.
2) It was not aligned on a boundary equal to its size,
which is required by the sun4v Hypervisor.
Also, the hypervisor TSB registration call wasn't having its return
value checked; that bug will be fixed up as well in a subsequent
changeset.
Signed-off-by: David S. Miller <davem@davemloft.net>
It was using an immediate _PAGE_EXEC_4U value in an 'and'
instruction to perform the test. This doesn't work because
the immediate field is a signed 13-bit value, thus the mask being
tested against the PTE was 0x1000 sign-extended to 32 bits
(i.e. 0xfffff000) instead of just plain 0x1000.
Signed-off-by: David S. Miller <davem@davemloft.net>
This is bug 8540 on bugzilla.kernel.org
arch/sparc64/time.c contains references to assorted bq4802 stuff if
CONFIG_PCI is not set, and the compile fails. I #ifdef'ed out
everything that looks PCI-ish in that file.
Signed-off-by: David S. Miller <davem@davemloft.net>
Cheetah systems can have cpuids as large as 1023, although physical
systems don't have that many cpus.
Only three limitations existed in the kernel preventing arbitrary
NR_CPUS values:
1) dcache dirty cpu state stored in page->flags on
D-cache aliasing platforms. With some build time
calculations and some build-time BUG checks on
page->flags layout, this one was easily solved.
2) The cheetah XCALL delivery code could only handle
a cpumask with up to 32 cpus set. Some simple looping
logic clears that up too.
3) thread_info->cpu was a u8, easily changed to a u16.
There are a few spots in the kernel that still put NR_CPUS
sized arrays on the kernel stack, but that's not a sparc64
specific problem.
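As a rough sketch of the page->flags encoding referred to in (1)
above (bit positions are illustrative; the helpers come from
linux/log2.h):

    #define PG_dcache_dirty         PG_arch_1
    #define PG_dcache_cpu_shift     32UL
    #define PG_dcache_cpu_mask      \
        ((1UL << ilog2(roundup_pow_of_two(NR_CPUS))) - 1UL)

    /* Which cpu dirtied this page's D-cache lines? */
    #define dcache_dirty_cpu(page) \
        (((page)->flags >> PG_dcache_cpu_shift) & PG_dcache_cpu_mask)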
Signed-off-by: David S. Miller <davem@davemloft.net>
These messages were very useful when bringing up the
OBP-based PCI device scan code, but now they are just a lot
of noise on every bootup, especially on big machines.
The messages can be re-enabled via 'ofpci_debug=1' on
the kernel command line.
Signed-off-by: David S. Miller <davem@davemloft.net>
Handle arbitrary base and length values as long as they
are multiples of IO_PAGE_SIZE.
Bug found by Arun Kumar Rao.
Signed-off-by: David S. Miller <davem@davemloft.net>
Hypervisor interfaces need to be negotiated in order to use
some API calls reliably. So add a small set of interfaces
to request API versions and query current settings.
This allows us to fix some bugs in the hypervisor console:
1) If we can negotiate API group CORE of at least major 1
minor 1, we can use con_read and con_write, which can improve
console performance quite a bit.
2) When we do a console write request, we should hold the
spinlock around the whole request, not a byte at a time.
Otherwise it is easy for output from different cpus to get
mixed with each other.
3) Use consistent udelay()-based polling: udelay(1) each
loop, with a limit of 1000 polls to handle a stuck hypervisor
console.
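The polling in (3) looks roughly like this (sun4v_con_putchar() and
HV_EWOULDBLOCK are the existing hypervisor call and status code; the
surrounding structure is just a sketch, and 'c' stands for the byte
being written):

    int limit = 1000;

    while (limit-- > 0) {
        unsigned long ret = sun4v_con_putchar(c);

        if (ret != HV_EWOULDBLOCK)
            break;
        /* Bounded, consistent polling instead of spinning
         * forever on a wedged hypervisor console.
         */
        udelay(1);
    }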
Signed-off-by: David S. Miller <davem@davemloft.net>